diff --git "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml" --- "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" +++ "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml" @@ -7,1221 +7,1170 @@ http://www.rssboard.org/rss-specification en-us - Thu, 11 Dec 2025 05:00:08 +0000 + Fri, 12 Dec 2025 05:00:05 +0000 rss-help@arxiv.org - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 Saturday Sunday - Online Inference of Constrained Optimization: Primal-Dual Optimality and Sequential Quadratic Programming - https://arxiv.org/abs/2512.08948 - arXiv:2512.08948v1 Announce Type: new -Abstract: We study online statistical inference for the solutions of stochastic optimization problems with equality and inequality constraints. Such problems are prevalent in statistics and machine learning, encompassing constrained $M$-estimation, physics-informed models, safe reinforcement learning, and algorithmic fairness. We develop a stochastic sequential quadratic programming (SSQP) method to solve these problems, where the step direction is computed by sequentially performing a quadratic approximation of the objective and a linear approximation of the constraints. Despite having access to unbiased estimates of population gradients, a key challenge in constrained stochastic problems lies in dealing with the bias in the step direction. As such, we apply a momentum-style gradient moving-average technique within SSQP to debias the step. We show that our method achieves global almost-sure convergence and exhibits local asymptotic normality with an optimal primal-dual limiting covariance matrix in the sense of H\'ajek and Le Cam. In addition, we provide a plug-in covariance matrix estimator for practical inference. To our knowledge, the proposed SSQP method is the first fully online method that attains primal-dual asymptotic minimax optimality without relying on projection operators onto the constraint set, which are generally intractable for nonlinear problems. Through extensive experiments on benchmark nonlinear problems, as well as on constrained generalized linear models and portfolio allocation problems using both synthetic and real data, we demonstrate superior performance of our method, showing that the method and its asymptotic behavior not only solve constrained stochastic problems efficiently but also provide valid and practical online inference in real-world applications. - oai:arXiv.org:2512.08948v1 - stat.ML - cs.LG - math.OC + Adaptive Nonparametric Estimation via Kernel Transport on Group Orbits: Oracle Inequalities and Minimax Rates + https://arxiv.org/abs/2512.10049 + arXiv:2512.10049v1 Announce Type: new +Abstract: We develop a unified framework for nonparametric functional estimation based on kernel transport along orbits of discrete group actions, which we term \emph{Twin Spaces}. Given a base kernel $K$ and a group $G = \langle\varphi\rangle$ acting isometrically on the input space $E$, we construct a hierarchy of transported kernels $\{K_j\}_{j\geq 0}$ and a penalized model selection scheme satisfying a Kraft inequality. 
Our main contributions are threefold: (i) we establish non-asymptotic oracle inequalities for the penalized twin-kernel estimator with explicit constants; (ii) we introduce novel twin-regularity classes that capture smoothness along group orbits and prove that our estimator adapts to these classes; (iii) we show that the framework recovers classical minimax-optimal rates in the Euclidean setting while enabling improved rates when the target function exhibits orbital structure. The effective dimension $d_{\mathrm{eff}}$ governing the rates is characterized in terms of the quotient $G/L$, where $L$ is the subgroup preserving the base operation. Connections to wavelet methods, geometric quantization, and adaptive computation are discussed. + oai:arXiv.org:2512.10049v1 math.ST stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yihang Gao, Michael K. Ng, Michael W. Mahoney, Sen Na + http://creativecommons.org/licenses/by/4.0/ + Jocelyn Nembe - All Emulators are Wrong, Many are Useful, and Some are More Useful Than Others: A Reproducible Comparison of Computer Model Surrogates - https://arxiv.org/abs/2512.09060 - arXiv:2512.09060v1 Announce Type: new -Abstract: Accurate and efficient surrogate modeling is essential for modern computational science, and there are a staggering number of emulation methods to choose from. With new methods being developed all the time, comparing the relative strengths and weaknesses of different methods remains a challenge due to inconsistent benchmarking practices and (sometimes) limited reproducibility and transparency. In this work, we present a large-scale, fully reproducible comparison of $29$ distinct emulators across $60$ canonical test functions and $40$ real emulation datasets. To facilitate rigorous, apples-to-apples comparisons, we introduce the R package \texttt{duqling}, which streamlines reproducible simulation studies using a consistent, simple syntax, and automatic internal scaling of inputs. This framework allows researchers to compare emulators in a unified environment and makes it possible to replicate or extend previous studies with minimal effort, even across different publications. Our results provide detailed empirical insight into the strengths and weaknesses of state-of-the-art emulators and offer guidance for both method developers and practitioners selecting a surrogate for new data. We discuss best practices for emulator comparison and highlight how \texttt{duqling} can accelerate research in emulator design and application. - oai:arXiv.org:2512.09060v1 - stat.CO + LxCIM: a new rank-based binary classifier performance metric invariant to local exchange of classes + https://arxiv.org/abs/2512.10053 + arXiv:2512.10053v1 Announce Type: new +Abstract: Binary classification is one of the oldest, most prevalent, and studied problems in machine learning. However, the metrics used to evaluate model performance have received comparatively little attention. The area under the receiver operating characteristic curve (AUROC) has long been a standard choice for model comparison. Despite its advantages, AUROC is not always ideal, particularly for problems that are invariant to local exchange of classes (LxC), a new form of metric invariance introduced in this work. 
To address this limitation, we propose LxCIM (LxC-invariant metric), which is not only rank-based and invariant under local exchange of classes, but also intuitive, logically consistent, and always computable, while enabling more detailed analysis through the cumulative accuracy-decision rate curve. Moreover, LxCIM exhibits clear theoretical connections to AUROC, accuracy, and the area under the accuracy-decision rate curve (AUDRC). These relationships allow for multiple complementary interpretations: as a symmetric form of AUROC, a rank-based analogue of accuracy, or a more representative and more interpretable variant of AUDRC. Finally, we demonstrate the direct applicability of LxCIM to the bivariate causal discovery problem (which exhibits invariance to local exchange of classes) and show how it addresses the acknowledged limitations of existing metrics used in this field. All code and implementation details are publicly available at github.com/tiagobrogueira/Causal-Discovery-In-Exchangeable-Data. + oai:arXiv.org:2512.10053v1 stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Kellin N. Rumsey, Graham C. Gibson, Devin Francom, Reid Morris - - - Complementary strengths of the Neyman-Rubin and graphical causal frameworks - https://arxiv.org/abs/2512.09130 - arXiv:2512.09130v1 Announce Type: new -Abstract: This article contributes to the discussion on the relationship between the Neyman-Rubin and the graphical frameworks for causal inference. We present specific examples of data-generating mechanisms - such as those involving undirected or deterministic relationships and cycles - where analyses using a directed acyclic graph are challenging, but where the tools from the Neyman-Rubin causal framework are readily applicable. We also provide examples of data-generating mechanisms with M-bias, trapdoor variables, and complex front-door structures, where the application of the Neyman-Rubin approach is complicated, but the graphical approach is directly usable. The examples offer insights into commonly used causal inference frameworks and aim to improve comprehension of the languages for causal reasoning among a broad audience. - oai:arXiv.org:2512.09130v1 - stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + cs.LG + Fri, 12 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Tetiana Gorbach, Xavier de Luna, Juha Karvanen, Ingeborg Waernbaum + Tiago Brogueira, M\'ario A. T. Figueiredo - IntegralGP: Volumetric estimation of subterranean geochemical properties in mineral deposits by fusing assay data with different spatial supports - https://arxiv.org/abs/2512.09151 - arXiv:2512.09151v1 Announce Type: new -Abstract: This article presents an Integral Gaussian Process (IntegralGP) framework for volumetric estimation of subterranean properties in mineral deposits. It provides a unified representation for data with different spatial supports, which enables blasthole geochemical assays to be properly modelled as interval observations rather than points. This approach is shown to improve regression performance and boundary delineation. A core contribution is a description of the mathematical changes to the covariance expressions which allow these benefits to be realised. The gradient and anti-derivatives are obtained to facilitate learning of the kernel hyperparameters. Numerical stability issues are also discussed. To illustrate its application, an IntegralGP data fusion algorithm is described. 
The objective is to assimilate line-based blasthole assays and update a block model that provides long-range prediction of Fe concentration beneath the drilled bench. Heteroscedastic GP is used to fuse chemically compatible but spatially incongruous data with different resolutions and sample spacings. Domain knowledge embodied in the structure and empirical distribution of the block model must be generally preserved while local inaccuracies are corrected. Using validation measurements within the predicted bench, our experiments demonstrate an improvement in bench-below grade prediction performance. For material classification, IntegralGP fusion reduces the absolute error and model bias in categorical prediction, especially instances where waste blocks are mistakenly classified as high-grade. - oai:arXiv.org:2512.09151v1 + A Primer on Bayesian Parameter Estimation and Model Selection for Battery Simulators + https://arxiv.org/abs/2512.10055 + arXiv:2512.10055v1 Announce Type: new +Abstract: Physics-based battery modelling has emerged to accelerate battery materials discovery and performance assessment. Its success, however, is still hindered by difficulties in aligning models to experimental data. Bayesian approaches are a valuable tool to overcome these challenges, since they enable prior assumptions and observations to be combined in a principled manner that improves numerical conditioning. Here we introduce two new algorithms to the battery community, SOBER and BASQ, that greatly speed up Bayesian inference for parameterisation and model comparison. We showcase how Bayesian model selection allows us to tackle data observability, model identifiability, and data-informed model development together. We propose this approach for the search for battery models of novel materials. + oai:arXiv.org:2512.10055v1 stat.ME + physics.data-an stat.AP - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Expert Systems with Applications 298A (2026) 129429 - Anna Chlingaryan, Arman Melkumyan, Raymond Leung + Yannick Kuhn, Masaki Adachi, Micha Philipp, David A. Howey, Birger Horstmann - WTNN: Weibull-Tailored Neural Networks for survival analysis - https://arxiv.org/abs/2512.09163 - arXiv:2512.09163v1 Announce Type: new -Abstract: The Weibull distribution is a commonly adopted choice for modeling the survival of systems subject to maintenance over time. When only proxy indicators and censored observations are available, it becomes necessary to express the distribution's parameters as functions of time-dependent covariates. Deep neural networks provide the flexibility needed to learn complex relationships between these covariates and operational lifetime, thereby extending the capabilities of traditional regression-based models. Motivated by the analysis of a fleet of military vehicles operating in highly variable and demanding environments, as well as by the limitations observed in existing methodologies, this paper introduces WTNN, a new neural network-based modeling framework specifically designed for Weibull survival studies. The proposed architecture is specifically designed to incorporate qualitative prior knowledge regarding the most influential covariates, in a manner consistent with the shape and structure of the Weibull distribution. 
Through numerical experiments, we show that this approach can be reliably trained on proxy and right-censored data, and is capable of producing robust and interpretable survival predictions that can improve existing approaches. - oai:arXiv.org:2512.09163v1 - stat.ML - cs.LG + Classifying Metamorphic versus Single-Fold Proteins with Statistical Learning and AlphaFold2 + https://arxiv.org/abs/2512.10066 + arXiv:2512.10066v1 Announce Type: new +Abstract: The remarkable success of AlphaFold2 in providing accurate atomic-level prediction of protein structures from their amino acid sequence has transformed approaches to the protein folding problem. However, its core paradigm of mapping one sequence to one structure may only be appropriate for single-fold proteins with one stable conformation. Metamorphic proteins, which can adopt multiple distinct conformations, have conformational diversity that cannot be adequately modeled by AlphaFold2. Hence, classifying whether a given protein is metamorphic or single-fold remains a critical challenge for both laboratory experiments and computational methods. To address this challenge, we developed a novel classification framework by re-purposing AlphaFold2 to generate conformational ensembles via a multiple sequence alignment sampling method. From these ensembles, we extract a comprehensive set of features characterizing the conformational ensemble's modality and structural dispersion. A random forest classifier trained on a carefully curated benchmark dataset of known metamorphic and single-fold proteins achieves a mean AUC of 0.869 with cross-validation, demonstrating the effectiveness of our integrated approach. Furthermore, by applying our classifier to 600 randomly sampled proteins from the Protein Data Bank, we identified several potential metamorphic protein candidates -- including the 40S ribosomal protein S30, whose conformational change is crucial for its secondary function in antimicrobial defense. By combining AI-driven protein structure prediction with statistical learning, our work provides a powerful new approach for discovering metamorphic proteins and deepens our understanding of their role in their molecular function. + oai:arXiv.org:2512.10066v1 stat.AP - stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + cs.AI + Fri, 12 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Gabrielle Rives, Olivier Lopez, Nicolas Bousquet - - - Refuting "Debunking the GAMLSS Myth: Simplicity Reigns in Pulmonary Function Diagnostics" - https://arxiv.org/abs/2512.09179 - arXiv:2512.09179v1 Announce Type: new -Abstract: We read with interest the above article by Zavorsky (2025, Respiratory Medicine, doi:10.1016/j.rmed.2024.107836) concerning reference equations for pulmonary function testing. The author compares a Generalized Additive Model for Location, Scale, and Shape (GAMLSS), which is the standard adopted by the Global Lung Function Initiative (GLI), with a segmented linear regression (SLR) model, for pulmonary function variables. The author presents an interesting comparison; however there are some fundamental issues with the approach. We welcome this opportunity for discussion of the issues that it raises. The author's contention is that (1) SLR provides "prediction accuracies on par with GAMLSS"; and (2) the GAMLSS model equations are "complicated and require supplementary spline tables", whereas the SLR is "more straightforward, parsimonious, and accessible to a broader audience". We respectfully disagree with both of these points. 
- oai:arXiv.org:2512.09179v1 stat.AP - Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ 10.1016/j.rmed.2025.108557 - Robert A. Rigby, Mikis D. Stasinopoulos, Achim Zeileis, Sanja Stanojevic, Gillian Heller, Fernanda de Bastiani, Thomas Kneib, Andreas Mayr, Reto Stauffer, Nikolaus Umlauf - Access to healthcare for people with Alzheimer's Diseases and related dementias - https://arxiv.org/abs/2512.09217 - arXiv:2512.09217v1 Announce Type: new -Abstract: Background: Alzheimer's Disease and Related Dementias (ADRD) affects millions worldwide. Significant disparities exist in ADRD diagnosis and care, disproportionately impacting minority and socioeconomically vulnerable populations. Objective: In this study, we investigate the relationship between ADRD density and accessibility to healthcare. We identify underserved and overserved areas in Maryland based on diagnosed cases and mortality due to ADRD, focusing on geographic disparities in care. Methods: 2023 Maryland ADRD patients were identified using ICD-10 codes from. Accessibility was measured using the Kernel Density Two-Step Floating Catchment Area (KD2SFCA) method. The Gini index and t-tests were used to analyze disparities between urban and rural areas. Hot Spot Analysis (Getis-Ord Gi*) and local bivariate relationships analysis were applied to assess spatial correlations. Principal component analysis (PCA) was applied to calculate the health risk index. Results: Hospital accessibility was unevenly distributed. Mortality rates from ADRD were higher in underserved areas with fewer hospitals. Hot spot analysis shows eastern and southern Maryland have zones with high mortality per population and per ADRD patient, surrounded by similarly high-rate zones. Central Maryland shows lower death rates per patient but more hospital facilities. In eastern Maryland, higher poverty areas are surrounded by zones with lower accessibility and higher health risk indices. Conclusion: Hospital accessibility is unevenly distributed, creating major rural disparities. Underserved regions in terms of access to healthcare facilities, particularly in eastern and southern Maryland, exhibit high ADRD mortality rates despite low diagnosis rates. This suggests that many ADRD cases remain undiagnosed, underdiagnosed, or subject to delayed treatment. - oai:arXiv.org:2512.09217v1 stat.AP - Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Saeed Saleh Namadi, Jie Chen, Deb Niemeier - Prenatal alcohol exposure and child cognition: semi-continuous exposures, causal inference and evidence synthesis - https://arxiv.org/abs/2512.09237 - arXiv:2512.09237v1 Announce Type: new -Abstract: We address the challenge of causal inference status and the dose-response effects with a semi-continuous exposure. A two-stage approach is proposed using estimating equation for multiple outcomes with large sample properties derived for the resulting estimators. Homogeneity tests are developed to assess whether causal effects of exposure status and the dose-response effects are the same across multiple outcomes. A global homogeneity test is also developed to assess whether the effect of exposure status (exposed/not exposed) and the dose-response effect of the continuous exposure level are each equal across all outcomes. 
The methods of estimation and testing are rigorously evaluated in simulation studies and applied to a motivating study on the effects of prenatal alcohol exposure on childhood cognition defined by executive function (EF), academic achievement in math, and learning and memory (LM). - oai:arXiv.org:2512.09237v1 - stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + TwinKernel Estimation for Point Process Intensity Functions: Adaptive Nonparametric Methods via Orbital Regularity + https://arxiv.org/abs/2512.10068 + arXiv:2512.10068v1 Announce Type: new +Abstract: We develop TwinKernel methods for nonparametric estimation of intensity functions of point processes. Building on the general TwinKernel framework and combining it with martingale techniques for counting processes, we construct estimators that adapt to orbital regularity of the intensity function. Given a point process $N$ with intensity $\lambda$ and a cyclic group $G = \langle\varphi\rangle$ acting on the time/space domain, we transport kernels along group orbits to create a hierarchy of smoothed Nelson-Aalen type estimators. Our main results establish: (i) uniform consistency via martingale concentration inequalities; (ii) optimal convergence rates for intensities in twin-H\"older classes, with rates depending on the effective dimension $d_{\mathrm{eff}}$; (iii) adaptation to unknown smoothness through penalized model selection; (iv) automatic boundary bias correction via local polynomial extensions in twin coordinates; (v) minimax lower bounds showing rate optimality. We apply the methodology to hazard rate estimation under random censoring, where periodicity or other orbital structure in the hazard may arise from circadian rhythms, seasonal effects, or treatment schedules. Martingale central limit theorems yield asymptotic confidence bands. Simulation studies demonstrate 3--7$\times$ improvements over classical kernel hazard estimators when the intensity exhibits orbital regularity. + oai:arXiv.org:2512.10068v1 + math.ST + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Xiaoya Wang, Richard J. Cook, Yeying Zhu, Tugba Akkaya-Hocagil, R. Colin Carter, Sandra W. Jacobson, Joseph L. Jacobson, Louise M. Ryan + Jocelyn Nemb\'e - MoDaH achieves rate optimal batch correction - https://arxiv.org/abs/2512.09259 - arXiv:2512.09259v1 Announce Type: new -Abstract: Batch effects pose a significant challenge in the analysis of single-cell omics data, introducing technical artifacts that confound biological signals. While various computational methods have achieved empirical success in correcting these effects, they lack the formal theoretical guarantees required to assess their reliability and generalization. To bridge this gap, we introduce Mixture-Model-based Data Harmonization (MoDaH), a principled batch correction algorithm grounded in a rigorous statistical framework. - Under a new Gaussian-mixture-model with explicit parametrization of batch effects, we establish the minimax optimal error rates for batch correction and prove that MoDaH achieves this rate by leveraging the recent theoretical advances in clustering data from anisotropic Gaussian mixtures. This constitutes, to the best of our knowledge, the first theoretical guarantee for batch correction. 
Extensive experiments on diverse single-cell RNA-seq and spatial proteomics datasets demonstrate that MoDaH not only attains theoretical optimality but also achieves empirical performance comparable to or even surpassing those of state-of-the-art heuristics (e.g., Harmony, Seurat-V5, and LIGER), effectively balancing the removal of technical noise with the conservation of biological signal. - oai:arXiv.org:2512.09259v1 + Incorporating Partial Adherence for Estimation of Dynamic Treatment Regimes + https://arxiv.org/abs/2512.10069 + arXiv:2512.10069v1 Announce Type: new +Abstract: Dynamic Treatment Regimes (DTRs) provide a systematic framework for optimizing sequential decision-making in chronic disease management, where therapies must adapt to patients' evolving clinical profiles. Inverse probability weighting (IPW) is a cornerstone methodology for estimating regime values from observational data due to its intuitive formulation and established theoretical properties, yet standard IPW estimators face significant limitations, including variance instability and data inefficiency. A fundamental but underexplored source of inefficiency lies in the strict binary adherence criterion that fails to account for partial adherence, thereby discarding substantial data from individuals with even minimal deviations from the target regime. We propose two novel methodologies that relax the strict inclusion rule through flexible compatibility mechanisms. Both methods provide computationally tractable alternatives that can be easily integrated into existing IPW workflows, offering more efficient approaches to DTR estimation. Theoretical analysis demonstrates that both estimators preserve consistency while achieving superior finite-sample efficiency compared to standard IPW, and comprehensive simulation studies confirm improved stability. We illustrate the practical utility of our methods through an application to HIV treatment data from the AIDS Clinical Trials Group Study 175 (ACTG175). + oai:arXiv.org:2512.10069v1 stat.ME - math.ST - q-bio.GN - stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yang Cao, Zongming Ma + Chloe Si, David A. Stephens, Erica E. M. Moodie - Vaccine sieve analysis on deep sequencing data using competing risks Cox regression with failure type subject to misclassification - https://arxiv.org/abs/2512.09262 - arXiv:2512.09262v1 Announce Type: new -Abstract: Understanding how vaccines perform against different pathogen genotypes is crucial for developing effective prevention strategies, particularly for highly genetically diverse pathogens like HIV. Sieve analysis is a statistical framework used to determine whether a vaccine selectively prevents acquisition of certain genotypes while allowing breakthrough of other genotypes that evade immune responses. Traditionally, these analyses are conducted with a single sequence available per individual acquiring the pathogen. However, modern sequencing technology can provide detailed characterization of intra-individual viral diversity by capturing up to hundreds of pathogen sequences per person. In this work, we introduce methodology that extends sieve analysis to account for intra-individual viral diversity. Our approach estimates vaccine efficacy against viral populations with varying true (unobservable) frequencies of vaccine-mismatched mutations. 
To account for differential resolution of information from differing sequence counts per person, we use competing risks Cox regression with modeled causes of failure and propose an empirical Bayes approach for the classification model. Simulation studies demonstrate that our approach reduces bias, provides nominal confidence interval coverage, and improves statistical power compared to conventional methods. We apply our method to the HVTN 705 Imbokodo trial, which assessed the efficacy of a heterologous vaccine regimen in preventing HIV-1 acquisition. - oai:arXiv.org:2512.09262v1 - stat.ME - stat.AP - Thu, 11 Dec 2025 00:00:00 -0500 + Concentration of Measure under Diffeomorphism Groups: A Universal Framework with Optimal Coordinate Selection + https://arxiv.org/abs/2512.10075 + arXiv:2512.10075v1 Announce Type: new +Abstract: We establish a universal framework for concentration inequalities based on invariance under diffeomorphism groups. Given a probability measure $\mu$ on a space $E$ and a diffeomorphism $\psi: E \to F$, concentration properties transfer covariantly: if the pushforward $\psi_*\mu$ concentrates, so does $\mu$ in the pullback geometry. This reveals that classical concentration inequalities -- Hoeffding, Bernstein, Talagrand, Gaussian isoperimetry -- are manifestations of a single principle of \emph{geometric invariance}. The choice of coordinate system $\psi$ becomes a free parameter that can be optimized. We prove that for any distribution class $\Pc$, there exists an optimal diffeomorphism $\psi^*$ minimizing the concentration constant, and we characterize $\psi^*$ in terms of the Fisher-Rao geometry of $\Pc$. We establish \emph{strict improvement theorems}: for heavy-tailed or multiplicative data, the optimal $\psi$ yields exponentially tighter bounds than the identity. We develop the full theory including transportation-cost inequalities, isoperimetric profiles, and functional inequalities, all parametrized by the diffeomorphism group $\Diff(E)$. Connections to information geometry (Amari's $\alpha$-connections), optimal transport with general costs, and Riemannian concentration are established. Applications to robust statistics, multiplicative models, and high-dimensional inference demonstrate that coordinate optimization can improve statistical efficiency by orders of magnitude. + oai:arXiv.org:2512.10075v1 + math.ST + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - James Peng, Michal Juraska, Pamela A. Shaw, Peter B. Gilbert + Jocelyn Nemb\'e - Robust and Sparse Estimation of Unbounded Density Ratio under Heavy Contamination - https://arxiv.org/abs/2512.09266 - arXiv:2512.09266v1 Announce Type: new -Abstract: We examine the non-asymptotic properties of robust density ratio estimation (DRE) in contaminated settings. Weighted DRE is the most promising among existing methods, exhibiting doubly strong robustness from an asymptotic perspective. This study demonstrates that Weighted DRE achieves sparse consistency even under heavy contamination within a non-asymptotic framework. This method addresses two significant challenges in density ratio estimation and robust estimation. For density ratio estimation, we provide the non-asymptotic properties of estimating unbounded density ratios under the assumption that the weighted density ratio function is bounded. 
For robust estimation, we introduce a non-asymptotic framework for doubly strong robustness under heavy contamination, assuming that at least one of the following conditions holds: (i) contamination ratios are small, and (ii) outliers have small weighted values. This work provides the first non-asymptotic analysis of strong robustness under heavy contamination. - oai:arXiv.org:2512.09266v1 + The Interplay of Statistics and Noisy Optimization: Learning Linear Predictors with Random Data Weights + https://arxiv.org/abs/2512.10188 + arXiv:2512.10188v1 Announce Type: new +Abstract: We analyze gradient descent with randomly weighted data points in a linear regression model, under a generic weighting distribution. This includes various forms of stochastic gradient descent, importance sampling, but also extends to weighting distributions with arbitrary continuous values, thereby providing a unified framework to analyze the impact of various kinds of noise on the training trajectory. We characterize the implicit regularization induced through the random weighting, connect it with weighted linear regression, and derive non-asymptotic bounds for convergence in first and second moments. Leveraging geometric moment contraction, we also investigate the stationary distribution induced by the added noise. Based on these results, we discuss how specific choices of weighting distribution influence both the underlying optimization problem and statistical properties of the resulting estimator, as well as some examples for which weightings that lead to fast convergence cause bad statistical performance. + oai:arXiv.org:2512.10188v1 stat.ML cs.LG - Thu, 11 Dec 2025 00:00:00 -0500 + stat.CO + Fri, 12 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Ryosuke Nagumo, Hironori Fujisawa + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Gabriel Clara, Yazan Mash'al - On the inverse of covariance matrices for unbalanced crossed designs - https://arxiv.org/abs/2512.09273 - arXiv:2512.09273v1 Announce Type: new -Abstract: This paper addresses a long-standing open problem in the analysis of linear mixed models with crossed random effects under unbalanced designs: how to find an analytic expression for the inverse of $\mathbf{V}$, the covariance matrix of the observed response. The inverse matrix $\mathbf{V}^{-1}$ is required for likelihood-based estimation and inference. However, for unbalanced crossed designs, $\mathbf{V}$ is dense and the lack of a closed-form representation for $\mathbf{V}^{-1}$, until now, has made using likelihood-based methods computationally challenging and difficult to analyse mathematically. We use the Khatri--Rao product to represent $\mathbf{V}$ and then to construct a modified covariance matrix whose inverse admits an exact spectral decomposition. Building on this construction, we obtain an elegant and simple approximation to $\mathbf{V}^{-1}$ for asymptotic unbalanced designs. For non-asymptotic settings, we derive an accurate and interpretable approximation under mildly unbalanced data and establish an exact inverse representation as a low-rank correction to this approximation, applicable to arbitrary degrees of unbalance. Simulation studies demonstrate the accuracy, stability, and computational tractability of the proposed framework. 
- oai:arXiv.org:2512.09273v1 + Semiparametric rank-based regression models as robust alternatives to parametric mean-based counterparts for censored responses under detection-limit + https://arxiv.org/abs/2512.10212 + arXiv:2512.10212v1 Announce Type: new +Abstract: Detection limits are common in biomedical and environmental studies, where key covariates or outcomes are censored below an assay-specific threshold. Standard approaches such as complete-case analysis, single-value substitution, and parametric Tobit-type models are either inefficient or sensitive to distributional misspecification. + We study semiparametric rank-based regression models as robust alternatives to parametric mean-based counterparts for censored responses under detection limits. Our focus is on accelerated failure time (AFT) type formulations, where rank-based estimating equations yield consistent slope estimates without specifying the error distribution. We develop a unifying simulation framework that generates left- and right-censored data under several data-generating mechanisms, including normal, Weibull, and log-normal error structures, with detection limits or administrative censoring calibrated to target censoring rates between 10\% and 60\%. + Across scenarios, we compare semiparametric AFT estimators with parametric Weibull AFT, Tobit, and Cox proportional hazards models in terms of bias, empirical variability, and relative efficiency. Numerical results show that parametric models perform well only under correct specification, whereas rank-based semiparametric AFT estimators maintain near-unbiased covariate effects and stable precision even under heavy censoring and distributional misspecification. These findings support semiparametric rank-based regression as a practical default for censored regression with detection limits when the error distribution is uncertain. + Keywords: Semiparametric models, Estimating equations, Left censoring, Right censoring, Tobit regression, Efficiency + oai:arXiv.org:2512.10212v1 stat.ME - math.ST - stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ziyang Lyu, S. A. Sisson, A. H. Welsh - - - Impact of Positional Encoding: Clean and Adversarial Rademacher Complexity for Transformers under In-Context Regression - https://arxiv.org/abs/2512.09275 - arXiv:2512.09275v1 Announce Type: new -Abstract: Positional encoding (PE) is a core architectural component of Transformers, yet its impact on the Transformer's generalization and robustness remains unclear. In this work, we provide the first generalization analysis for a single-layer Transformer under in-context regression that explicitly accounts for a completely trainable PE module. Our result shows that PE systematically enlarges the generalization gap. Extending to the adversarial setting, we derive the adversarial Rademacher generalization bound. We find that the gap between models with and without PE is magnified under attack, demonstrating that PE amplifies the vulnerability of models. Our bounds are empirically validated by a simulation study. Together, this work establishes a new framework for understanding the clean and adversarial generalization in ICL with PE. - oai:arXiv.org:2512.09275v1 - stat.ML - cs.LG - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Weiyi He, Yue Xing + Y. Xu, S. Tu L. Shao, T. Lin, X. M. 
Tu - Distributional Shrinkage II: Optimal Transport Denoisers with Higher-Order Scores - https://arxiv.org/abs/2512.09295 - arXiv:2512.09295v1 Announce Type: new -Abstract: We revisit the signal denoising problem through the lens of optimal transport: the goal is to recover an unknown scalar signal distribution $X \sim P$ from noisy observations $Y = X + \sigma Z$, with $Z$ being standard Gaussian independent of $X$ and $\sigma>0$ a known noise level. Let $Q$ denote the distribution of $Y$. We introduce a hierarchy of denoisers $T_0, T_1, \ldots, T_\infty : \mathbb{R} \to \mathbb{R}$ that are agnostic to the signal distribution $P$, depending only on higher-order score functions of $Q$. Each denoiser $T_K$ is progressively refined using the $(2K-1)$-th order score function of $Q$ at noise resolution $\sigma^{2K}$, achieving better denoising quality measured by the Wasserstein metric $W(T_K \sharp Q, P)$. The limiting denoiser $T_\infty$ identifies the optimal transport map with $T_\infty \sharp Q = P$.
 - We provide a complete characterization of the combinatorial structure underlying this hierarchy through Bell polynomial recursions, revealing how higher-order score functions encode the optimal transport map for signal denoising. We study two estimation strategies with convergence rates for higher-order scores from i.i.d. samples drawn from $Q$: (i) plug-in estimation via Gaussian kernel smoothing, and (ii) direct estimation via higher-order score matching. This hierarchy of agnostic denoisers opens new perspectives in signal denoising and empirical Bayes. - oai:arXiv.org:2512.09295v1 + On Learning-Curve Monotonicity for Maximum Likelihood Estimators + https://arxiv.org/abs/2512.10220 + arXiv:2512.10220v1 Announce Type: new +Abstract: The property of learning-curve monotonicity, highlighted in a recent series of work by Loog, Mey and Viering, describes algorithms which only improve in average performance given more data, for any underlying data distribution within a given family. We establish the first nontrivial monotonicity guarantees for the maximum likelihood estimator in a variety of well-specified parametric settings. For sequential prediction with log loss, we show monotonicity (in fact complete monotonicity) of the forward KL divergence for Gaussian vectors with unknown covariance and either known or unknown mean, as well as for Gamma variables with unknown scale parameter. The Gaussian setting was explicitly highlighted as open in the aforementioned works, even in dimension 1. Finally we observe that for reverse KL divergence, a folklore trick yields monotonicity for very general exponential families.
 + All results in this paper were derived by variants of GPT-5.2 Pro. Humans did not provide any proof strategies or intermediate arguments, but only prompted the model to continue developing additional results, and verified and transcribed its proofs. + oai:arXiv.org:2512.10220v1 math.ST cs.LG stat.ML stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tengyuan Liang + Mark Sellke, Steven Yin - Estimating order scale parameters of two scale mixture of exponential distributions - https://arxiv.org/abs/2512.09305 - arXiv:2512.09305v1 Announce Type: new -Abstract: Estimation of the ordered scale parameter of a two scale mixture of the exponential distribution is considered under Stein loss and symmetric loss. Under certain conditions, we prove the inadmissibility of the best equivariant estimator by exhibiting several improved estimators. 
Consequently, we propose various estimators that dominate the best affine equivariant estimators (BAEE). Also, we propose a class of estimators that dominates BAEE. We have proved that the boundary estimator of this class is a generalized Bayes estimator. The results are applied to the multivariate Lomax distribution and the Exponential Inverse Gaussian (E-IG) distribution. Consequently, we have obtained improved estimators for the ordered scale parameters of two multivariate Lomax distributions and the exponential inverse Gaussian distribution. For each case, we have conducted a simulation study to compare the risk performance of the improved estimators. - oai:arXiv.org:2512.09305v1 - math.ST - stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-sa/4.0/ - Somnath Mondal, Lakshmi Kanta Patra + Mark Sellke, Steven Yin - Group Cooperation Diverges onto Durable Low versus High Paths: Public Goods Experiments in 134 Honduran Villages - https://arxiv.org/abs/2512.09316 - arXiv:2512.09316v1 Announce Type: new -Abstract: We performed large, lab-in-the-field experiment (2,591 participants across 134 Honduran villages; ten rounds) and tracked how contribution behavior unfolds in fixed, anonymous groups of size five. Contribution separates early into two durable paths, one low and one high, with rare convergence thereafter. High-path players can be identified with strong accuracy early on. Groups that begin with an early majority of above-norm contributors (about 60%) are very likely finish high. The empirical finding of a bifurcation, consistent with the theory, shows that early, high contributions by socially central people steer groups onto, and help keep them on, a high-cooperation path. - oai:arXiv.org:2512.09316v1 + Time-Averaged Drift Approximations are Inconsistent for Inference in Drift Diffusion Models + https://arxiv.org/abs/2512.10250 + arXiv:2512.10250v1 Announce Type: new +Abstract: Drift diffusion models (DDMs) have found widespread use in computational neuroscience and other fields. They model evidence accumulation in simple decision tasks as a stochastic process drifting towards a decision barrier. In models where the drift rate is both time-varying within a trial and variable across trials, the high computational cost for accurate likelihood evaluation has led to the common use of a computationally convenient surrogate for parameter inference, the time-averaged drift approximation (TADA). In each trial, the TADA assumes that the time-varying drift rate can be replaced by its temporal average throughout the trial. This approach enables fast parameter inference using analytical likelihood formulas for DDMs with constant drift. In this work, we show that such an estimator is inconsistent: it does not converge to the true drift, posing a risk of biasing scientific conclusions drawn from parameter estimates produced by TADA and similar surrogates. We provide an elementary proof of this inconsistency in what is perhaps the simplest possible setting: a Brownian motion with piecewise constant drift hitting a one-sided upper boundary. Furthermore, we conduct numerical examples with an attentional DDM (aDDM) to show that the use of TADA systematically misestimates the effect of attention in decision making. 
+ oai:arXiv.org:2512.10250v1 + stat.ME stat.AP - stat.OT - Thu, 11 Dec 2025 00:00:00 -0500 + stat.CO + Fri, 12 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Marios Papamichalis, Nicholas Christakis, Feng Fu + Sicheng Liu, Alexander Fengler, Michael J. Frank, Matthew T. Harrison - Balancing Weights for Causal Mediation Analysis - https://arxiv.org/abs/2512.09337 - arXiv:2512.09337v1 Announce Type: new -Abstract: This paper develops methods for estimating the natural direct and indirect effects in causal mediation analysis. The efficient influence function-based estimator (EIF-based estimator) and the inverse probability weighting estimator (IPW estimator), which are standard in causal mediation analysis, both rely on the inverse of the estimated propensity scores, and thus they are vulnerable to two key issues (i) instability and (ii) finite-sample covariate imbalance. We propose estimators based on the weights obtained by an algorithm that directly penalizes weight dispersion while enforcing approximate covariate and mediator balance, thereby improving stability and mitigating bias in finite samples. We establish the convergence rates of the proposed weights and show that the resulting estimators are asymptotically normal and achieve the semiparametric efficiency bound. Monte Carlo simulations demonstrate that the proposed estimator outperforms not only the EIF-based estimator and the IPW estimator but also the regression imputation estimator in challenging scenarios with model misspecification. Furthermore, the proposed method is applied to a real dataset from a study examining the effects of media framing on immigration attitudes. - oai:arXiv.org:2512.09337v1 + Peace Sells, But Whose Songs Connect? Bayesian Multilayer Network Analysis of the Big 4 of Thrash Metal + https://arxiv.org/abs/2512.10254 + arXiv:2512.10254v1 Announce Type: new +Abstract: We propose a Bayesian framework for multilayer song similarity networks and apply it to the complete studio discographies of the "Big 4" of thrash metal (Metallica, Slayer, Megadeth, Anthrax). Starting from raw audio, we construct four feature-specific layers (loudness, brightness, tonality, rhythm), augment them with song exogenous information, and represent each layer as a k-nearest neighbor graph. We then fit a family of hierarchical probit models with global and layer-specific baselines, node- and layer-specific sociability effects, dyadic covariates, and alternative forms of latent structure (bilinear, distance-based, and stochastic block communities), comparing increasingly flexible specifications using posterior predictive checks, discrimination and calibration metrics (AUC, Brier score, log-loss), and information criteria (DIC, WAIC). Across all bands, the richest stochastic block specification attains the best predictive performance and posterior predictive fit, while revealing sparse but structured connectivity, interpretable covariate effects (notably album membership and temporal proximity), and latent communities and hubs that cut across albums and eras. Taken together, these results illustrate how Bayesian multilayer network models can help organize high-dimensional audio and text features into coherent, musically meaningful patterns. + oai:arXiv.org:2512.10254v1 stat.ME - econ.EM - Thu, 11 Dec 2025 00:00:00 -0500 + math.ST + stat.AP + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Kentaro Kawato + Juan Sosa, Erika Mart\'inez, Danna L. 
Cruz-Reyes - Minimization of Functions on Dually Flat Spaces Using Geodesic Descent Based on Dual Connections - https://arxiv.org/abs/2512.09358 - arXiv:2512.09358v1 Announce Type: new -Abstract: We propose geodesic-based optimization methods on dually flat spaces, where the geometric structure of the parameter manifold is closely related to the form of the objective function. A primary application is maximum likelihood estimation in statistical models, especially exponential families, whose model manifolds are dually flat. We show that an m-geodesic update, which directly optimizes the log-likelihood, can theoretically reach the maximum likelihood estimator in a single step. In contrast, an e-geodesic update has a practical advantage in cases where the parameter space is geodesically complete, allowing optimization without explicitly handling parameter constraints. We establish the theoretical properties of the proposed methods and validate their effectiveness through numerical experiments. - oai:arXiv.org:2512.09358v1 - stat.CO + Error Analysis of Generalized Langevin Equations with Approximated Memory Kernels + https://arxiv.org/abs/2512.10256 + arXiv:2512.10256v1 Announce Type: new +Abstract: We analyze prediction error in stochastic dynamical systems with memory, focusing on generalized Langevin equations (GLEs) formulated as stochastic Volterra equations. We establish that, under a strongly convex potential, trajectory discrepancies decay at a rate determined by the decay of the memory kernel and are quantitatively bounded by the estimation error of the kernel in a weighted norm. Our analysis integrates synchronized noise coupling with a Volterra comparison theorem, encompassing both subexponential and exponential kernel classes. For first-order models, we derive moment and perturbation bounds using resolvent estimates in weighted spaces. For second-order models with confining potentials, we prove contraction and stability under kernel perturbations using a hypocoercive Lyapunov-type distance. This framework accommodates non-translation-invariant kernels and white-noise forcing, explicitly linking improved kernel estimation to enhanced trajectory prediction. Numerical examples validate these theoretical findings. + oai:arXiv.org:2512.10256v1 stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Gaku Omiya, Fumiyasu Komaki - - - Model-robust Inference for Seamless II/III Trials with Covariate Adaptive Randomization - https://arxiv.org/abs/2512.09430 - arXiv:2512.09430v1 Announce Type: new -Abstract: Seamless phase II/III trials have become a cornerstone of modern drug development, offering a means to accelerate evaluation while maintaining statistical rigor. However, most existing inference procedures are model-based, designed primarily for continuous outcomes, and often neglect the stratification used in covariate-adaptive randomization (CAR), limiting their practical relevance. In this paper, we propose a unified, model-robust framework for seamless phase II/III trials grounded in generalized linear models (GLMs), enabling valid inference across diverse outcome types, estimands, and CAR schemes. 
Using Z-estimation, we derive the asymptotic properties of treatment effect estimators and explicitly characterize how their variance depends on the underlying randomization procedure. Based on these results, we develop adjusted Wald tests that, together with Dunnett's multiple-comparison procedure and the inverse chi-square combination method, ensure valid overall Type I error. Extensive simulation studies and a trial example demonstrate that the proposed model-robust tests achieve superior power and reliable inference compared to conventional approaches. - oai:arXiv.org:2512.09430v1 stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kun Yi, Lucy Xia - Multiply-robust Estimator of Cumulative Incidence Function Difference for Right-Censored Competing Risks Data - https://arxiv.org/abs/2512.09433 - arXiv:2512.09433v1 Announce Type: new -Abstract: In causal inference, estimating the average treatment effect is a central objective, and in the context of competing risks data, this effect can be quantified by the cause-specific cumulative incidence function (CIF) difference. While doubly robust estimators give a more robust way to estimate the causal effect from the observational study, they remain inconsistent if both models are misspecified. To improve the robustness, we develop a multiply robust estimator for the difference in cause-specific CIFs using right-censored competing risks data. The proposed framework integrates the pseudo-value approach, which transforms the censored, time-dependent CIF into a complete-data outcome, with the multiply robust estimation framework. By specifying multiple candidate models for both the propensity score and the outcome regression, the resulting estimator is consistent and asymptotically unbiased, provided that at least one of the multiple propensity score or outcome regression models is correctly specified. Simulation studies show our multiply robust estimator remains virtually unbiased and maintains nominal coverage rates under various model misspecification scenarios and a wide range of choices for the censoring rate. Finally, the proposed multiply robust model is illustrated using the Right Heart Catheterization dataset. - oai:arXiv.org:2512.09433v1 + Alpha Power Harris-G Family of Distributions: Properties and Application to Burr XII Distribution + https://arxiv.org/abs/2512.10276 + arXiv:2512.10276v1 Announce Type: new +Abstract: This study introduces a new family of probability distributions, termed the alpha power Harris-generalized (APHG) family. The generator arises by incorporating two shape parameters from the Harris-G framework into the alpha power transformation, resulting in a more flexible class for modelling survival and reliability data. A special member of this family, obtained using the two-parameter Burr XII distribution as the baseline, is developed and examined in detail. Several analytical properties of the proposed alpha power Harris Burr XII (APHBXII) model are derived, which include closed-form expressions for its moments, mean and median deviations, Bonferroni and Lorenz curves, order statistics, and Renyi and Tsallis entropies. Parameter estimation is performed via maximum likelihood, and a Monte Carlo simulation study is carried out to assess the finite-sample performance of the estimators. 
In addition, three real lifetime datasets are analyzed to evaluate the empirical performance of the APHBXII distribution relative to four competing models. The results show that the five-parameter APHBXII model provides superior fit across all datasets, as supported by model-selection criteria and goodness-of-fit statistics. + oai:arXiv.org:2512.10276v1 + stat.AP + math.ST stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yifei Tian, Ying Wu + Gbenga A. Olalude, Taiwo A. Ojurongbe, Olalekan A. Bello, Kehinde A. Bashiru, Kazeem A. Alamu - Estimation of Stochastic Optimal Transport Maps - https://arxiv.org/abs/2512.09499 - arXiv:2512.09499v1 Announce Type: new -Abstract: The optimal transport (OT) map is a geometry-driven transformation between high-dimensional probability distributions which underpins a wide range of tasks in statistics, applied probability, and machine learning. However, existing statistical theory for OT map estimation is quite restricted, hinging on Brenier's theorem (quadratic cost, absolutely continuous source) to guarantee existence and uniqueness of a deterministic OT map, on which various additional regularity assumptions are imposed to obtain quantitative error bounds. In many real-world problems these conditions fail or cannot be certified, in which case optimal transportation is possible only via stochastic maps that can split mass. To broaden the scope of map estimation theory to such settings, this work introduces a novel metric for evaluating the transportation quality of stochastic maps. Under this metric, we develop computationally efficient map estimators with near-optimal finite-sample risk bounds, subject to easy-to-verify minimal assumptions. Our analysis further accommodates common forms of adversarial sample contamination, yielding estimators with robust estimation guarantees. Empirical experiments are provided which validate our theory and demonstrate the utility of the proposed framework in settings where existing theory fails. These contributions constitute the first general-purpose theory for map estimation, compatible with a wide spectrum of real-world applications where optimal transport may be intrinsically stochastic. - oai:arXiv.org:2512.09499v1 + Diffusion differentiable resampling + https://arxiv.org/abs/2512.10401 + arXiv:2512.10401v1 Announce Type: new +Abstract: This paper is concerned with differentiable resampling in the context of sequential Monte Carlo (e.g., particle filtering). We propose a new informative resampling method that is instantly pathwise differentiable, based on an ensemble score diffusion model. We prove that our diffusion resampling method provides a consistent estimate to the resampling distribution, and we show by experiments that it outperforms the state-of-the-art differentiable resampling methods when used for stochastic filtering and parameter estimation. 
+ oai:arXiv.org:2512.10401v1 stat.ML cs.LG math.ST stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sloan Nietert, Ziv Goldfeld + Jennifer Rosina Andersson, Zheng Zhao - Calibration with Bagging of the Principal Components on a Large Number of Auxiliary Variables - https://arxiv.org/abs/2512.09505 - arXiv:2512.09505v1 Announce Type: new -Abstract: Calibration is a widely used method in survey sampling to adjust weights so that estimated totals of some chosen calibration variables match known population totals or totals obtained from other sources. When a large number of auxiliary variables are included as calibration variables, the variance of the total estimator can increase, and the calibration weights can become highly dispersed. To address these issues, we propose a solution inspired by bagging and principal component decomposition. With our approach, the principal components of the auxiliary variables are constructed. Several samples of calibration variables are selected without replacement and with unequal probabilities from among the principal components. For each sample, a system of weights is obtained. The final weights are the average weights of these different weighting systems. With our proposed method, it is possible to calibrate exactly for some of the main auxiliary variables. For the other auxiliary variables, the weights cannot be calibrated exactly. The proposed method allows us to obtain a total estimator whose variance does not explode when new auxiliary variables are added and to obtain very low scatter weights. Finally, our proposed method allows us to obtain a single weighting system that can be applied to several variables of interest of a survey. - oai:arXiv.org:2512.09505v1 - stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Caren Hasler, Arnaud Tripet, Yves Till\'e - - - Transformers for Tabular Data: A Training Perspective of Self-Attention via Optimal Transport - https://arxiv.org/abs/2512.09530 - arXiv:2512.09530v1 Announce Type: new -Abstract: This thesis examines self-attention training through the lens of Optimal Transport (OT) and develops an OT-based alternative for tabular classification. The study tracks intermediate projections of the self-attention layer during training and evaluates their evolution using discrete OT metrics, including Wasserstein distance, Monge gap, optimality, and efficiency. Experiments are conducted on classification tasks with two and three classes, as well as on a biomedical dataset. - Results indicate that the final self-attention mapping often approximates the OT optimal coupling, yet the training trajectory remains inefficient. Pretraining the MLP section on synthetic data partially improves convergence but is sensitive to their initialization. To address these limitations, an OT-based algorithm is introduced: it generates class-specific dummy Gaussian distributions, computes an OT alignment with the data, and trains an MLP to generalize this mapping. The method achieves accuracy comparable to Transformers while reducing computational cost and scaling more efficiently under standardized inputs, though its performance depends on careful dummy-geometry design. All experiments and implementations are conducted in R. 
- oai:arXiv.org:2512.09530v1 + Supervised Learning of Random Neural Architectures Structured by Latent Random Fields on Compact Boundaryless Multiply-Connected Manifolds + https://arxiv.org/abs/2512.10407 + arXiv:2512.10407v1 Announce Type: new +Abstract: This paper introduces a new probabilistic framework for supervised learning in neural systems. It is designed to model complex, uncertain systems whose random outputs are strongly non-Gaussian given deterministic inputs. The architecture itself is a random object stochastically generated by a latent anisotropic Gaussian random field defined on a compact, boundaryless, multiply-connected manifold. The goal is to establish a novel conceptual and mathematical framework in which neural architectures are realizations of a geometry-aware, field-driven generative process. Both the neural topology and synaptic weights emerge jointly from a latent random field. A reduced-order parameterization governs the spatial intensity of an inhomogeneous Poisson process on the manifold, from which neuron locations are sampled. Input and output neurons are identified via extremal evaluations of the latent field, while connectivity is established through geodesic proximity and local field affinity. Synaptic weights are conditionally sampled from the field realization, inducing stochastic output responses even for deterministic inputs. To ensure scalability, the architecture is sparsified via percentile-based diffusion masking, yielding geometry-aware sparse connectivity without ad hoc structural assumptions. Supervised learning is formulated as inference on the generative hyperparameters of the latent field, using a negative log-likelihood loss estimated through Monte Carlo sampling from single-observation-per-input datasets. The paper initiates a mathematical analysis of the model, establishing foundational properties such as well-posedness, measurability, and a preliminary analysis of the expressive variability of the induced stochastic mappings, which support its internal coherence and lay the groundwork for a broader theory of geometry-driven stochastic learning. + oai:arXiv.org:2512.10407v1 stat.ML cs.LG - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Antonio Candelieri, Alessandro Quadrio + Christian Soize - Don't Throw Away Your Beams: Improving Consistency-based Uncertainties in LLMs via Beam Search - https://arxiv.org/abs/2512.09538 - arXiv:2512.09538v1 Announce Type: new -Abstract: Consistency-based methods have emerged as an effective approach to uncertainty quantification (UQ) in large language models. These methods typically rely on several generations obtained via multinomial sampling, measuring their agreement level. However, in short-form QA, multinomial sampling is prone to producing duplicates due to peaked distributions, and its stochasticity introduces considerable variance in uncertainty estimates across runs. We introduce a new family of methods that employ beam search to generate candidates for consistency-based UQ, yielding improved performance and reduced variance compared to multinomial sampling. We also provide a theoretical lower bound on the beam set probability mass under which beam search achieves a smaller error than multinomial sampling. We empirically evaluate our approach on six QA datasets and find that its consistent improvements over multinomial sampling lead to state-of-the-art UQ performance. 
- oai:arXiv.org:2512.09538v1 + Maximum Risk Minimization with Random Forests + https://arxiv.org/abs/2512.10445 + arXiv:2512.10445v1 Announce Type: new +Abstract: We consider a regression setting where observations are collected in different environments modeled by different data distributions. The field of out-of-distribution (OOD) generalization aims to design methods that generalize better to test environments whose distributions differ from those observed during training. One line of such works has proposed to minimize the maximum risk across environments, a principle that we refer to as MaxRM (Maximum Risk Minimization). In this work, we introduce variants of random forests based on the principle of MaxRM. We provide computationally efficient algorithms and prove statistical consistency for our primary method. Our proposed method can be used with each of the following three risks: the mean squared error, the negative reward (which relates to the explained variance), and the regret (which quantifies the excess risk relative to the best predictor). For MaxRM with regret as the risk, we prove a novel out-of-sample guarantee over unseen test distributions. Finally, we evaluate the proposed methods on both simulated and real-world data. + oai:arXiv.org:2512.10445v1 stat.ML - cs.CL + cs.AI cs.LG - Thu, 11 Dec 2025 00:00:00 -0500 + stat.ME + Fri, 12 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Ekaterina Fadeeva, Maiya Goloburda, Aleksandr Rubashevskii, Roman Vashurin, Artem Shelmanov, Preslav Nakov, Mrinmaya Sachan, Maxim Panov + Francesco Freni, Anya Fries, Linus K\"uhne, Markus Reichstein, Jonas Peters - A Bayesian Approach for Robust Longitudinal Envelope Models - https://arxiv.org/abs/2512.09553 - arXiv:2512.09553v1 Announce Type: new -Abstract: The envelope model provides a dimension-reduction framework for multivariate linear regression. However, existing envelope methods typically assume normally distributed random errors and do not accommodate repeated measures in longitudinal studies. To address these limitations, we propose the robust longitudinal envelope model (RoLEM). RoLEM employs a scale mixture of matrix-variate normal distributions to model random errors, allowing it to handle potential outliers, and incorporates flexible correlation structures for repeated measurements. In addition, we introduce new prior and proposal distributions on the Grassmann manifold to facilitate Bayesian inference for RoLEM. Simulation studies and real data analysis demonstrate the superior performance of the proposed method. - oai:arXiv.org:2512.09553v1 + Long memory network time series + https://arxiv.org/abs/2512.10446 + arXiv:2512.10446v1 Announce Type: new +Abstract: Many scientific areas, from computer science to the environmental sciences and finance, give rise to multivariate time series which exhibit long memory, or loosely put, a slow decay in their autocorrelation structure. Efficient modelling and estimation in such settings is key for a number of analysis tasks, such as accurate prediction. However, traditional approaches for modelling such data, for example long memory vector autoregressive processes, are challenging even in modest dimensions, as the number of parameters grows quadratically with the number of modelled variables. 
Additionally, in many practical data settings, the observed series is accompanied by a (possibly inferred) network that provides information about the presence or absence of between-component associations via the graph edge topology. This article proposes two new models for capturing the dynamics of long memory time series where a network is accounted for. Our approach not only facilitates the analysis of graph-structured long memory time series, but also improves computational efficiency over traditional multivariate long memory models by leveraging the inherent low-dimensional parameter space by adapting likelihood-based estimation algorithms to the network setting. Simulation studies show that our proposed estimation is more stable than traditional models, and is able to tackle data scenarios where current models fail due to computational challenges. While widely applicable, here we demonstrate the efficacy of our proposed models on datasets arising in environmental science and finance. + oai:arXiv.org:2512.10446v1 stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Peng Zeng, Yushan Mu + http://creativecommons.org/licenses/by/4.0/ + Chiara Boetti, Matthew A. Nunes, Marina I. Knight - Neural posterior inference with state-space models for calibrating ice sheet simulators - https://arxiv.org/abs/2512.09561 - arXiv:2512.09561v1 Announce Type: new -Abstract: Ice sheet models are routinely used to quantify and project an ice sheet's contribution to sea level rise. In order for an ice sheet model to generate realistic projections, its parameters must first be calibrated using observational data; this is challenging due to the nonlinearity of the model equations, the high dimensionality of the underlying parameters, and limited data availability for validation. This study leverages the emerging field of neural posterior approximation for efficiently calibrating ice sheet model parameters and boundary conditions. We make use of a one-dimensional (flowline) Shallow-Shelf Approximation model in a state-space framework. A neural network is trained to infer the underlying parameters, namely the bedrock elevation and basal friction coefficient along the flowline, based on observations of ice velocity and ice surface elevation. Samples from the approximate posterior distribution of the parameters are then used within an ensemble Kalman filter to infer latent model states, namely the ice thickness along the flowline. We show through a simulation study that our approach yields more accurate estimates of the parameters and states than a state-augmented ensemble Kalman filter, which is the current state-of-the-art. We apply our approach to infer the bed elevation and basal friction along a flowline in Thwaites Glacier, Antarctica. - oai:arXiv.org:2512.09561v1 - stat.AP - Thu, 11 Dec 2025 00:00:00 -0500 + Learning Time-Varying Correlation Networks with FDR Control via Time-Varying P-values + https://arxiv.org/abs/2512.10467 + arXiv:2512.10467v1 Announce Type: new +Abstract: This paper presents a systematic framework for controlling false discovery rate in learning time-varying correlation networks from high-dimensional, non-linear, non-Gaussian and non-stationary time series with an increasing number of potential abrupt change points in means. 
We propose a bootstrap-assisted approach to derive dependent and time-varying P-values from a robust estimate of time-varying correlation functions, which are not sensitive to change points. Our procedure is based on a new high-dimensional Gaussian approximation result for the uniform approximation of P-values across time and different coordinates. Moreover, we establish theoretically guaranteed Benjamini--Hochberg and Benjamini--Yekutieli procedures for the dependent and time-varying P-values, which can achieve uniform false discovery rate control. The proposed methods are supported by rigorous mathematical proofs and simulation studies. We also illustrate the real-world application of our framework using both brain electroencephalogram and financial time series data. + oai:arXiv.org:2512.10467v1 + stat.ME + econ.EM + math.ST + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Bao Anh Vu, Andrew Zammit-Mangion, David Gunawan, Felicity S. McCormack, Noel Cressie + Bufan Li, Lujia Bai, Weichi Wu - Uniform-over-dimension location tests for multivariate and high-dimensional data - https://arxiv.org/abs/2512.09659 - arXiv:2512.09659v1 Announce Type: new -Abstract: Asymptotic methods for hypothesis testing in high-dimensional data usually require the dimension of the observations to increase to infinity, often with an additional relationship between the dimension (say, $p$) and the sample size (say, $n$). On the other hand, multivariate asymptotic testing methods are valid for fixed dimension only and their implementations typically require the sample size to be large compared to the dimension to yield desirable results. In practical scenarios, it is usually not possible to determine whether the dimension of the data conform to the conditions required for the validity of the high-dimensional asymptotic methods for hypothesis testing, or whether the sample size is large enough compared to the dimension of the data. In this work, we first describe the notion of uniform-over-$p$ convergences and subsequently, develop a uniform-over-dimension central limit theorem. An asymptotic test for the two-sample equality of locations is developed, which now holds uniformly over the dimension of the observations. Using simulated and real data, it is demonstrated that the proposed test exhibits better performance compared to several popular tests in the literature for high-dimensional data as well as the usual scaled two-sample tests for multivariate data, including the Hotelling's $T^2$ test for multivariate Gaussian data. - oai:arXiv.org:2512.09659v1 - stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + Adaptive almost full recovery in sparse nonparametric models + https://arxiv.org/abs/2512.10488 + arXiv:2512.10488v1 Announce Type: new +Abstract: We observe an unknown function of $d$ variables $f(\boldsymbol{t})$, $\boldsymbol{t} \in[0,1]^d$, in the Gaussian white noise model of intensity $\varepsilon>0$. We assume that the function $f$ is regular and that it is a sum of $k$-variate functions, where $k$ varies from $1$ to $s$ ($1\leq s\leq d$). These functions are unknown to us and only a few of them are nonzero. In this article, we address the problem of identifying the nonzero function components of $f$ almost fully in the case when $d=d_\varepsilon\to \infty$ as $\varepsilon\to 0$ and $s$ is either fixed or $s=s_\varepsilon\to \infty$, $s=o(d)$ as $\varepsilon\to 0$. This may be viewed as a variable selection problem. 
We derive the conditions under which almost full variable selection in the model at hand is possible and provide a selection procedure that achieves this type of selection. The procedure is adaptive to the level of sparsity described by the sparsity index $\beta\in(0,1)$. We also derive conditions that make almost full variable selection in the model of our interest impossible. In view of these conditions, the proposed selector is seen to be asymptotically optimal. The theoretical findings are illustrated numerically.
 + oai:arXiv.org:2512.10488v1
 math.ST
 stat.TH
 Fri, 12 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Natalia Stepanova, Marie Turcicova, Xiang Zhao
 
 
 - A simple geometric proof for the characterisation of e-merging functions
 - https://arxiv.org/abs/2512.09708
 - arXiv:2512.09708v1 Announce Type: new
-Abstract: E-values offer a powerful framework for aggregating evidence across different (possibly dependent) statistical experiments. A fundamental question is to identify e-merging functions, namely mappings that merge several e-values into a single valid e-value. A simple and elegant characterisation of this function class was recently obtained by Wang (2025), though via technically involved arguments. This note gives a short and intuitive geometric proof of the same characterisation, based on a supporting hyperplane argument applied to concave envelopes. We also show that the result holds even without imposing monotonicity in the definition of e-merging functions, which was needed for the existing proof. This shows that any non-monotone merging rule is automatically dominated by a monotone one, and hence extending the definition beyond the monotone case brings no additional generality.
 - oai:arXiv.org:2512.09708v1
 + Measures of inaccuracy based on Varextropy
 + https://arxiv.org/abs/2512.10502
 + arXiv:2512.10502v1 Announce Type: new
+Abstract: Recently, varextropy has been introduced as a new dispersion index and a measure of information. In this article, we derive the generating function of extropy and present its infinite series representation. Furthermore, we propose new variability measures: the inaccuracy and weighted inaccuracy measures between two random variables based on varextropy, and we investigate their properties. We also obtain lower bounds for the inaccuracy measure and compare them with each other. In addition, we introduce a discrimination measure based on varextropy and employ it both for comparing probability distributions and for assessing the goodness of fit of distributions to data, and we compare this measure with the dispersion index derived from the Kullback-Leibler divergence given in Balakrishnan et al. (2022).
 + oai:arXiv.org:2512.10502v1
 math.ST
 stat.TH
 - Thu, 11 Dec 2025 00:00:00 -0500
 + Fri, 12 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 - Eugenio Clerico
 + Faranak Goodarzi, Somayeh Ghafouri
 
 
 - Bayesian Model Selection with an Application to Cosmology
 - https://arxiv.org/abs/2512.09724
 - arXiv:2512.09724v1 Announce Type: new
-Abstract: We investigate cosmological parameter inference and model selection from a Bayesian perspective. Type Ia supernova data from the Dark Energy Survey (DES-SN5YR) are used to test the \(\Lambda\)CDM, \(w\)CDM, and CPL cosmological models. 
Posterior inference is performed via Hamiltonian Monte Carlo using the No-U-Turn Sampler (NUTS) implemented in NumPyro and analyzed with ArviZ in Python. Bayesian model comparison is conducted through Bayes factors computed using the \texttt{bridgesampling} library in R. The results indicate that all three models demonstrate similar predictive performance, but \(w\)CDM shows stronger evidence relative to \(\Lambda\)CDM and CPL. We conclude that, under the assumptions and data used in this study, \(w\)CDM provides a better description of cosmological expansion. - oai:arXiv.org:2512.09724v1 - stat.AP - astro-ph.CO + A Bayesian Two-Sample Mean Test for High-Dimensional Data + https://arxiv.org/abs/2512.10537 + arXiv:2512.10537v1 Announce Type: new +Abstract: We propose a two-sample Bayesian mean test based on the Bayes factor with non-informative priors, specifically designed for scenarios where $p$ grows with $n$ with a linear rate $p/n \to c_1 \in (0, \infty)$. We establish the asymptotic normality of the test statistic and the asymptotic power. Through extensive simulations, we demonstrate that the proposed test performs competitively, particularly when the diagonal elements have heterogeneous variances and for small sample sizes. Furthermore, our test remains robust under distribution misspecification. The proposed method not only effectively detects both sparse and non-sparse differences in mean vectors but also maintains a well-controlled type I error rate, even in small-sample scenarios. We also demonstrate the performance of our proposed test using the \texttt{SRBCTs} dataset. + oai:arXiv.org:2512.10537v1 stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + stat.CO + Fri, 12 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Nikoloz Gigiberia + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Daojiang He, Suren Xu, Jing Zhou - Network Meta Analysis of Mean Survival - https://arxiv.org/abs/2512.09732 - arXiv:2512.09732v1 Announce Type: new -Abstract: Decisions based upon pairwise comparisons of multiple treatments are naturally performed in terms of the mean survival of the selected study arms or functions thereof. However, synthesis of treatment comparisons is usually performed on surrogates of the mean survival, such as hazard ratios or restricted mean survival times. Thus, network meta-analysis techniques may suffer from the limitations of these approaches, such as incorrect proportional hazards assumption or short-term follow-up periods. We propose a Bayesian framework for the network meta-analysis of the main outcome informing the decision, the mean survival of a treatment. Its derivation involves extrapolation of the observed survival curves. We use methods for stable extrapolation that integrate long term evidence based upon mortality projections. Extrapolations are performed using flexible poly-hazard parametric models and M-spline-based methods. We assess the computational and statistical efficiency of different techniques using a simulation study and apply the developed methods to two real data sets. The proposed method is formulated within a decision theoretic framework for cost-effectiveness analyses, where the `best' treatment is to be selected and incorporating the associated cost information is straightforward. - oai:arXiv.org:2512.09732v1 - stat.AP + Bootstrapping not under the null? 
+ https://arxiv.org/abs/2512.10546 + arXiv:2512.10546v1 Announce Type: new +Abstract: We propose a bootstrap testing framework for a general class of hypothesis tests, which allows resampling under the null hypothesis as well as other forms of bootstrapping. We identify combinations of resampling schemes and bootstrap statistics for which the resulting tests are asymptotically exact and consistent against fixed alternatives. We show that in these cases the limiting local power functions are the same for the different resampling schemes. We also show that certain naive bootstrap schemes do not work. To demonstrate its versatility, we apply the framework to several examples: independence tests, tests on the coefficients in linear regression models, goodness-of-fit tests for general parametric models and for semi-parametric copula models. Simulation results confirm the asymptotic results and suggest that in smaller samples non-traditional bootstrap schemes may have advantages. This bootstrap-based hypothesis testing framework is implemented in the R package BootstrapTests. + oai:arXiv.org:2512.10546v1 + math.ST stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Anastasios Apsemidis, Dimitris Mavridis, Nikolaos Demiris + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Alexis Derumigny, Miltiadis Galanis, Wieger Schipper, Aad van der Vaart - A general class of continuous asymmetric distributions with positive support - https://arxiv.org/abs/2512.09787 - arXiv:2512.09787v1 Announce Type: new -Abstract: In order to better fit real-world datasets, studying asymmetric distribution is of great interest. In this work, we derive several mathematical properties of a general class of asymmetric distributions with positive support which shows up as a unified framework for Extreme Value Theory asymptotic results. The new model generalizes some well-known distribution models such as Generalized Gamma, Inverse Gamma, Weibull, Fr\'echet, Half-normal, Modified half-normal, Rayleigh, and Erlang. To highlight the applicability of our results, the performance of the analytical models is evaluated through real-life dataset modeling. - oai:arXiv.org:2512.09787v1 - math.ST - stat.AP - stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + Flexible Deep Neural Networks for Partially Linear Survival Data + https://arxiv.org/abs/2512.10570 + arXiv:2512.10570v1 Announce Type: new +Abstract: We propose a flexible deep neural network (DNN) framework for modeling survival data within a partially linear regression structure. The approach preserves interpretability through a parametric linear component for covariates of primary interest, while a nonparametric DNN component captures complex time-covariate interactions among nuisance variables. We refer to the method as FLEXI-Haz, a flexible hazard model with a partially linear structure. In contrast to existing DNN approaches for partially linear Cox models, FLEXI-Haz does not rely on the proportional hazards assumption. We establish theoretical guarantees: the neural network component attains minimax-optimal convergence rates based on composite Holder classes, and the linear estimator is root-n consistent, asymptotically normal, and semiparametrically efficient. Extensive simulations and real-data analyses demonstrate that FLEXI-Haz provides accurate estimation of the linear effect, offering a principled and interpretable alternative to modern methods based on proportional hazards. 
Code for implementing FLEXI-Haz, as well as scripts for reproducing data analyses and simulations, is available at: https://github.com/AsafBanana/FLEXI-Haz + oai:arXiv.org:2512.10570v1 + stat.ML + cs.LG + Fri, 12 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Felipe S. Quintino, Pushpa N. Rathie, Luan C. S. M. Ozelim, Tiago A. da Fonseca, Roberto Vila + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Asaf Ben Arie, Malka Gorfine - A Conversation with Mike West - https://arxiv.org/abs/2512.09790 - arXiv:2512.09790v1 Announce Type: new -Abstract: Mike West is currently the Arts & Sciences Distinguished Professor Emeritus of Statistics and Decision Sciences at Duke University. Mike's research in Bayesian analysis spans multiple interlinked areas: theory and methods of dynamic models in time series analysis, foundations of inference and decision analysis, multivariate and latent structure analysis, stochastic computation and optimisation, among others. Inter-disciplinary R&D has ranged across applications in commercial forecasting, dynamic networks, finance, econometrics, signal processing, climatology, systems biology, genomics and neuroscience, among other areas. Among Mike's currently active research areas are forecasting, causal prediction and decision analysis in business, economic policy and finance, as well as in personal decision making. Mike led the development of academic statistics at Duke University from 1990-2002, and has been broadly engaged in professional leadership elsewhere. He is past president of the International Society for Bayesian Analysis (ISBA), and has served in founding roles and as board member for several professional societies, national and international centres and institutes. Recipient of numerous awards, Mike has been active in research with various companies, banks, government agencies and academic centres, co-founder of a successful biotechnology company, and board member for several financial and IT companies. He has published 4 books, several edited volumes and over 200 papers. Mike has worked with many undergraduate and Master's research students, and as of 2025 has mentored around 65 primary PhD students and postdoctoral associates who moved to academic, industrial or governmental positions involving advanced statistical and data science research. - oai:arXiv.org:2512.09790v1 - stat.OT - Thu, 11 Dec 2025 00:00:00 -0500 + Lasso-Ridge Refitting: A Two-Stage Estimator for High-Dimensional Linear Regression + https://arxiv.org/abs/2512.10632 + arXiv:2512.10632v1 Announce Type: new +Abstract: The least absolute shrinkage and selection operator (Lasso) is a popular method for high-dimensional statistics. However, it is known that the Lasso often has estimation bias and prediction error. To address such disadvantages, many alternatives and refitting strategies have been proposed and studied. This work introduces a novel Lasso--Ridge method. Our analysis indicates that the proposed estimator achieves improved prediction performance in a range of settings, including cases where the Lasso is tuned at its theoretical optimal rate \(\sqrt{\log(p)/n}\). Moreover, the proposed method retains several key advantages of the Lasso, such as prediction consistency and reliable variable selection under mild conditions. Through extensive simulations, we further demonstrate that our estimator outperforms the Lasso in both prediction and estimation accuracy, highlighting its potential as a powerful tool for high-dimensional linear regression. 
+ oai:arXiv.org:2512.10632v1 + stat.ME + Fri, 12 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Guo Liu (Waseda University) + + + Revisiting the apparent discrepancy between the frequentist and Bayesian interpretation of an adaptive design + https://arxiv.org/abs/2512.10697 + arXiv:2512.10697v1 Announce Type: new +Abstract: It is generally appreciated that a frequentist analysis of a group sequential trial must in order to avoid inflating type I error account for the fact that one or more interim analyses were performed. It is also to a lesser extent realised that it may be necessary to account for the ensuing estimation bias. A group sequential design is an instance of adaptive clinical trials where a study may change its design dynamically as a reaction to the observed data. There is a widespread perception that one may circumvent the statistical issues associated with the analysis of an adaptive clinical trial by performing the analysis under a Bayesian paradigm. The root of the argument is that the Bayesian posterior is perceived as unaltered by the data-driven adaptations. We examine this claim by analysing a simple trial with a single interim analysis. We approach the interpretation of the trial data under both a frequentist and Bayesian paradigm with a focus on estimation. The conventional result is that the interim analysis impacts the estimation procedure under the frequentist paradigm, but not under the Bayesian paradigm, which may be seen as expressing a "paradox" between the two paradigms. We argue that this result however relies heavily on what one would define as the universe of relevant trials defined by first samples of the parameters from a prior distribution and then the data from a sampling model given the parameters. In particular, in this set of trials, whether a connection exists between the parameter of interest and design parameters. We show how an alternative interpretation of the trial yields a Bayesian posterior mean that corrects for the interim analysis with a term that closely resembles the frequentist conditional bias. We conclude that the role of auxiliary trial parameters needs to be carefully considered when constructing a prior in an adaptive design. + oai:arXiv.org:2512.10697v1 + stat.ME + Fri, 12 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hedibert F. Lopes, Filippo Ascolani + Simon Bang Kristensen, Erik Thorlund Parner - RECAP Framework v1.0: A Multi-Layer Inheritance Architecture for Evidence Synthesis - https://arxiv.org/abs/2512.09821 - arXiv:2512.09821v1 Announce Type: new -Abstract: Evidence synthesis has advanced through improved reporting standards, bias assessment tools, and analytic methods, but current workflows remain limited by a single-layer structure in which conceptual, methodological, and procedural decisions are made on the same level. This forces each project to rebuild its methodological foundations from scratch, leading to inconsistencies, conceptual drift, and unstable reasoning across projects. RECAP Framework v1.0 introduces a three-layer meta-architecture consisting of methodological laws (Grandparent), domain-level abstractions (Parent), and project-level implementations (Child). The framework defines an inheritance system with strict rules for tiering, routing, and contamination control to preserve construct clarity, enforce inferential discipline, and support reproducibility across multi-project evidence ecosystems. 
RECAP provides a formal governance layer for evidence synthesis and establishes the foundation for a methodological lineage designed to stabilize reasoning across research programs.
 - oai:arXiv.org:2512.09821v1
 + Dynamic sparse graphs with overlapping communities
 + https://arxiv.org/abs/2512.10717
 + arXiv:2512.10717v1 Announce Type: new
+Abstract: Dynamic community detection in networks addresses the challenge of tracking how groups of interconnected nodes evolve, merge, and dissolve within time-evolving networks. Here, we propose a novel statistical framework for sparse networks with power-law degree distribution and dynamic overlapping community structure. Using a Bayesian Nonparametric framework, we build on the idea of representing the graph as an exchangeable point process on the plane. We base the model construction on vectors of completely random measures and a latent Markov process for the time-evolving node affiliations. This construction provides a flexible and interpretable approach to model dynamic communities, naturally generalizing existing overlapping block models to the sparse and scale-free regimes. We provide the asymptotic properties of the model concerning sparsity and power-law behavior and propose inference through an approximate procedure which we validate empirically. We show how the model can uncover interpretable community trajectories in a real-world network.
 + oai:arXiv.org:2512.10717v1
 stat.ME
 - Thu, 11 Dec 2025 00:00:00 -0500
 + Fri, 12 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 - Hung Kuan Lee
 + Antreas Laos, Xenia Miscouridou, Francesca Panero
 
 
 - Predictor-Informed Bayesian Nonparametric Clustering
 - https://arxiv.org/abs/2512.09826
 - arXiv:2512.09826v1 Announce Type: new
-Abstract: In this project we are interested in performing clustering of observations such that the cluster membership is influenced by a set of predictors. To that end, we employ the Bayesian nonparametric Common Atoms Model (CAM), which is a nested clustering algorithm that utilizes a (fixed) group membership for each observation to encourage more similar clustering of members of the same group. CAM operates by assuming each group has its own vector of cluster probabilities, which are themselves clustered to allow similar clustering for some groups. We extend this approach by treating the group membership as an unknown latent variable determined as a flexible nonparametric form of the covariate vector. Consequently, observations with similar predictor values will be in the same latent group and are more likely to be clustered together than observations with disparate predictors. We propose a pyramid group model that flexibly partitions the predictor space into these latent group memberships. This pyramid model operates similarly to a Bayesian regression tree process except that it uses the same splitting rule for all nodes at the same tree depth, which facilitates improved mixing. We outline a block Gibbs sampler to perform posterior inference from our model. Our methodology is demonstrated in simulation and real data examples. In the real data application, we utilize the RAND Health and Retirement Study to cluster and predict patient outcomes in terms of the number of overnight hospital stays. 
- oai:arXiv.org:2512.09826v1 + Identifiable factor analysis for mixed continuous and binary variables based on the Gaussian-Grassmann distribution + https://arxiv.org/abs/2512.10804 + arXiv:2512.10804v1 Announce Type: new +Abstract: We develop a factor analysis for mixed continuous and binary observed variables. To this end, we utilized a recently developed multivariate probability distribution for mixed-type random variables, the Gaussian-Grassmann distribution. In the proposed factor analysis, marginalization over latent variables can be performed analytically, yielding an analytical expression for the distribution of the observed variables. This analytical tractability allows model parameters to be estimated using standard gradient-based optimization techniques. We also address improper solutions associated with maximum likelihood factor analysis. We propose a prescription to avoid improper solutions by imposing a constraint that row vectors of the factor loading matrix have the same norm for all features. Then, we prove that the proposed factor analysis is identifiable under the norm constraint. We demonstrate the validity of this norm constraint prescription and numerically verified the model's identifiability using both real and synthetic datasets. We also compare the proposed model with quantification method and found that the proposed model achieves better reproducibility of correlations than the quantification method. + oai:arXiv.org:2512.10804v1 stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + physics.data-an + Fri, 12 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Md Yasin Ali Parh, Jeremy T. Gaskins + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Takashi Arai - Supervised learning pays attention - https://arxiv.org/abs/2512.09912 - arXiv:2512.09912v1 Announce Type: new -Abstract: In-context learning with attention enables large neural networks to make context-specific predictions by selectively focusing on relevant examples. Here, we adapt this idea to supervised learning procedures such as lasso regression and gradient boosting, for tabular data. Our goals are to (1) flexibly fit personalized models for each prediction point and (2) retain model simplicity and interpretability. - Our method fits a local model for each test observation by weighting the training data according to attention, a supervised similarity measure that emphasizes features and interactions that are predictive of the outcome. Attention weighting allows the method to adapt to heterogeneous data in a data-driven way, without requiring cluster or similarity pre-specification. Further, our approach is uniquely interpretable: for each test observation, we identify which features are most predictive and which training observations are most relevant. We then show how to use attention weighting for time series and spatial data, and we present a method for adapting pretrained tree-based models to distributional shift using attention-weighted residual corrections. Across real and simulated datasets, attention weighting improves predictive performance while preserving interpretability, and theory shows that attention-weighting linear models attain lower mean squared error than the standard linear model under mixture-of-models data-generating processes with known subgroup structure. 
- oai:arXiv.org:2512.09912v1 - stat.ML - cs.AI + An Elementary Proof of the Near Optimality of LogSumExp Smoothing + https://arxiv.org/abs/2512.10825 + arXiv:2512.10825v1 Announce Type: new +Abstract: We consider the design of smoothings of the (coordinate-wise) max function in $\mathbb{R}^d$ in the infinity norm. The LogSumExp function $f(x)=\ln(\sum^d_i\exp(x_i))$ provides a classical smoothing, differing from the max function in value by at most $\ln(d)$. We provide an elementary construction of a lower bound, establishing that every overestimating smoothing of the max function must differ by at least $\sim 0.8145\ln(d)$. Hence, LogSumExp is optimal up to constant factors. However, in small dimensions, we provide stronger, exactly optimal smoothings attaining our lower bound, showing that the entropy-based LogSumExp approach to smoothing is not exactly optimal. + oai:arXiv.org:2512.10825v1 + math.ST cs.LG - Thu, 11 Dec 2025 00:00:00 -0500 + math.OC + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Erin Craig, Robert Tibshirani + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Thabo Samakhoana, Benjamin Grimmer - Optimizing Algorithms for Mobile Health Interventions with Active Querying Optimization - https://arxiv.org/abs/2512.08950 - arXiv:2512.08950v1 Announce Type: cross -Abstract: Reinforcement learning in mobile health (mHealth) interventions requires balancing intervention efficacy with user burden, particularly when state measurements (for example, user surveys or feedback) are costly yet essential. The Act-Then-Measure (ATM) heuristic addresses this challenge by decoupling control and measurement actions within the Action-Contingent Noiselessly Observable Markov Decision Process (ACNO-MDP) framework. However, the standard ATM algorithm relies on a temporal-difference-inspired Q-learning method, which is prone to instability in sparse and noisy environments. In this work, we propose a Bayesian extension to ATM that replaces standard Q-learning with a Kalman filter-style Bayesian update, maintaining uncertainty-aware estimates of Q-values and enabling more stable and sample-efficient learning. We evaluate our method in both toy environments and clinically motivated testbeds. In small, tabular environments, Bayesian ATM achieves comparable or improved scalarized returns with substantially lower variance and more stable policy behavior. In contrast, in larger and more complex mHealth settings, both the standard and Bayesian ATM variants perform poorly, suggesting a mismatch between ATM's modeling assumptions and the structural challenges of real-world mHealth domains. These findings highlight the value of uncertainty-aware methods in low-data settings while underscoring the need for new RL algorithms that explicitly model causal structure, continuous states, and delayed feedback under observation cost constraints. - oai:arXiv.org:2512.08950v1 - cs.LG - stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Aseel Rawashdeh + Measures and Models of Non-Monotonic Dependence + https://arxiv.org/abs/2512.10828 + arXiv:2512.10828v1 Announce Type: new +Abstract: A margin-free measure of bivariate association generalizing Spearman's rho to the case of non-monotonic dependence is defined in terms of two square integrable functions on the unit interval. 
Properties of generalized Spearman correlation are investigated when the functions are piecewise continuous and strictly monotonic, with particular focus on the special cases where the functions are drawn from orthonormal bases defined by Legendre polynomials and cosine functions. For continuous random variables, generalized Spearman correlation is treated as a copula-based measure and shown to depend on a pair of uniform-distribution-preserving (udp) transformations determined by the underlying functions. Bounds for generalized Spearman correlation are derived and a novel technique referred to as stochastic inversion of udp transformations is used to construct singular copulas that attain the bounds and parametric copulas with densities that interpolate between the bounds and model different degrees of non-monotonic dependence. Sample analogues of generalized Spearman correlation are proposed and their asymptotic and small-sample properties are investigated. Potential applications of the theory are demonstrated including: exploratory analyses of the dependence structures of datasets and their symmetries; elicitation of functions maximizing generalized Spearman correlation via expansions in orthonormal basis functions; and construction of tractable probability densities to model a wide variety of non-monotonic dependencies. + oai:arXiv.org:2512.10828v1 + stat.ME + math.ST + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Alexander J. McNeil, Johanna G. Neslehova, Andrew D. Smith - DW-KNN: A Transparent Local Classifier Integrating Distance Consistency and Neighbor Reliability - https://arxiv.org/abs/2512.08956 - arXiv:2512.08956v1 Announce Type: cross -Abstract: K-Nearest Neighbors (KNN) is one of the most used ML classifiers. However, if we observe closely, standard distance-weighted KNN and relative variants assume all 'k' neighbors are equally reliable. In heterogeneous feature space, this becomes a limitation that hinders reliability in predicting true levels of the observation. - We propose DW-KNN (Double Weighted KNN), a transparent and robust variant that integrates exponential distance with neighbor validity. This enables instance-level interpretability, suppresses noisy or mislabeled samples, and reduces hyperparameter sensitivity. - Comprehensive evaluation on 9 data-sets helps to demonstrate that DW-KNN achieves 0.8988 accuracy on average. It ranks 2nd among six methods and within 0.2% of the best-performing Ensemble KNN. It also exhibits the lowest cross-validation variance (0.0156), indicating reliable prediction stability. Statistical significance test confirmed ($p < 0.001$) improvement over compactness weighted KNN (+4.09\%) and Kernel weighted KNN (+1.13\%). The method provides a simple yet effective alternative to complex adaptive schemes, particularly valuable for high-stakes applications requiring explainable predictions. - oai:arXiv.org:2512.08956v1 - cs.LG + Physics-informed Polynomial Chaos Expansion with Enhanced Constrained Optimization Solver and D-optimal Sampling + https://arxiv.org/abs/2512.10873 + arXiv:2512.10873v1 Announce Type: new +Abstract: Physics-informed polynomial chaos expansions (PC$^2$) provide an efficient physically constrained surrogate modeling framework by embedding governing equations and other physical constraints into the standard data-driven polynomial chaos expansions (PCE) and solving via the Karush-Kuhn-Tucker (KKT) conditions. 
This approach improves the physical interpretability of surrogate models while achieving high computational efficiency and accuracy. However, the performance and efficiency of PC$^2$ can still be degraded with high-dimensional parameter spaces, limited data availability, or unrepresentative training data. To address this problem, this study explores two complementary enhancements to the PC$^2$ framework. First, a numerically efficient constrained optimization solver, straightforward updating of Lagrange multipliers (SULM), is adopted as an alternative to the conventional KKT solver. The SULM method significantly reduces computational cost when solving physically constrained problems with high-dimensionality and derivative boundary conditions that require a large number of virtual points. Second, a D-optimal sampling strategy is utilized to select informative virtual points to improve the stability and achieve the balance of accuracy and efficiency of the PC$^2$. The proposed methods are integrated into the PC$^2$ framework and evaluated through numerical examples of representative physical systems governed by ordinary or partial differential equations. The results demonstrate that the enhanced PC$^2$ has better comprehensive capability than standard PC$^2$, and is well-suited for high-dimensional uncertainty quantification tasks. + oai:arXiv.org:2512.10873v1 stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 - cross + cs.LG + Fri, 12 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Kumarjit Pathak, Karthik K, Sachin Madan, Jitin Kapila + Qitian Lu, Himanshu Sharma, Michael D. Shields, Luk\'a\v{s} Nov\'ak - BISTRO - A Bi-Fidelity Stochastic Gradient Framework using Trust-Regions for Optimization Under Uncertainty - https://arxiv.org/abs/2512.09055 - arXiv:2512.09055v1 Announce Type: cross -Abstract: Stochastic optimization of engineering systems is often infeasible due to repeated evaluations of a computationally expensive, high-fidelity simulation. Bi-fidelity methods mitigate this challenge by leveraging a cheaper, approximate model to accelerate convergence. Most existing bi-fidelity approaches, however, exploit either design-space curvature or random-space correlation, not both. We present BISTRO - a BI-fidelity Stochastic Trust-Region Optimizer for unconstrained optimization under uncertainty through a stochastic approximation procedure. This approach exploits the curvature information of a low-fidelity objective function to converge within a basin of a local minimum of the high-fidelity model where low-fidelity curvature information is no longer valuable. The method then switches to a variance-reduced stochastic gradient descent procedure. We provide convergence guarantees in expectation under certain regularity assumptions and ensure the best-case $\mathcal{O}(1/n)$ convergence rate for stochastic optimization. On benchmark problems and a 20-dimensional space shuttle reentry case, BISTRO converges faster than adaptive sampling and variance reduction procedures and cuts computational expense by up to 29x. 
- oai:arXiv.org:2512.09055v1 - math.OC - stat.CO - Thu, 11 Dec 2025 00:00:00 -0500 + Beyond prewhitening: detection of gravity modes and their period spacings in slowly pulsating B stars using the multitaper F-test + https://arxiv.org/abs/2512.10019 + arXiv:2512.10019v1 Announce Type: cross +Abstract: Gravity modes in main-sequence stars have traditionally been studied using a prewhitening approach, which iteratively identifies modes in the Fourier domain and subsequently tunes their frequencies, amplitudes, and phases through time-domain regression. While effective, this method becomes inefficient when analysing large volumes of long time-series data and often relies on subjective stopping criteria to determine the number of iterations. We aim to perform frequency extraction of gravity modes in slowly pulsating B (SPB) stars using a statistically robust, data-driven approach based on advanced power spectrum and harmonic analysis techniques. Our approach employs the multitaper non-uniform fast Fourier transform, mtNUFFT, a power spectrum estimator that addresses several statistical limitations of traditional methods such as the Lomb-Scargle periodogram. We apply its extension, the multitaper F-test, to extract coherent gravity modes from 4-year Kepler light curves of SPB stars and to search for period spacing patterns among the extracted modes. The multitaper F-test enables fast and accurate extraction of the properties of gravity modes with quasi-infinite lifetimes, preferentially selecting modes that exhibit purely periodic behaviour. Although the method typically extracts fewer frequencies than conventional prewhitening, it recovers most known modes and, in some cases, reveals new ones. We also find evidence for gravity modes with long but finite lifetimes, and detect more than one period spacing pattern in some of the studied SPB stars. Overall, the multitaper F-test offers a more objective and statistically sound alternative to prewhitening. It scales efficiently to large datasets containing thousands of pulsators, and has the potential to facilitate mode identification and to distinguish between the different excitation mechanisms operating in SPB stars. + oai:arXiv.org:2512.10019v1 + astro-ph.SR + astro-ph.IM + stat.AP + Fri, 12 Dec 2025 00:00:00 -0500 cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Thomas O. Dixon, Geoffrey F. Bomarito, James E. Warner, Alex A. Gorodetsky + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Aarya A. Patil, Conny Aerts, Nikki Y. N. Wang, Jordan Van Beeck, May G. Pedersen - Banach neural operator for Navier-Stokes equations - https://arxiv.org/abs/2512.09070 - arXiv:2512.09070v1 Announce Type: cross -Abstract: Classical neural networks are known for their ability to approximate mappings between finite-dimensional spaces, but they fall short in capturing complex operator dynamics across infinite-dimensional function spaces. Neural operators, in contrast, have emerged as powerful tools in scientific machine learning for learning such mappings. However, standard neural operators typically lack mechanisms for mixing or attending to input information across space and time. In this work, we introduce the Banach neural operator (BNO) -- a novel framework that integrates Koopman operator theory with deep neural networks to predict nonlinear, spatiotemporal dynamics from partial observations. 
The BNO approximates a nonlinear operator between Banach spaces by combining spectral linearization (via Koopman theory) with deep feature learning (via convolutional neural networks and nonlinear activations). This sequence-to-sequence model captures dominant dynamic modes and allows for mesh-independent prediction. Numerical experiments on the Navier-Stokes equations demonstrate the method's accuracy and generalization capabilities. In particular, BNO achieves robust zero-shot super-resolution in unsteady flow prediction and consistently outperforms conventional Koopman-based methods and deep learning models. - oai:arXiv.org:2512.09070v1 - cs.NE + Cluster-Dags as Powerful Background Knowledge For Causal Discovery + https://arxiv.org/abs/2512.10032 + arXiv:2512.10032v1 Announce Type: cross +Abstract: Finding cause-effect relationships is of key importance in science. Causal discovery aims to recover a graph from data that succinctly describes these cause-effect relationships. However, current methods face several challenges, especially when dealing with high-dimensional data and complex dependencies. Incorporating prior knowledge about the system can aid causal discovery. In this work, we leverage Cluster-DAGs as a prior knowledge framework to warm-start causal discovery. We show that Cluster-DAGs offer greater flexibility than existing approaches based on tiered background knowledge and introduce two modified constraint-based algorithms, Cluster-PC and Cluster-FCI, for causal discovery in the fully and partially observed setting, respectively. Empirical evaluation on simulated data demonstrates that Cluster-PC and Cluster-FCI outperform their respective baselines without prior knowledge. + oai:arXiv.org:2512.10032v1 cs.LG + cs.AI stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1063/5.0284818 - Bo Zhang + http://creativecommons.org/licenses/by/4.0/ + Jan Marco Ruiz de Vargas, Kirtan Padh, Niki Kilbertus - Beyond the Hype: Comparing Lightweight and Deep Learning Models for Air Quality Forecasting - https://arxiv.org/abs/2512.09076 - arXiv:2512.09076v1 Announce Type: cross -Abstract: Accurate forecasting of urban air pollution is essential for protecting public health and guiding mitigation policies. While Deep Learning (DL) and hybrid pipelines dominate recent research, their complexity and limited interpretability hinder operational use. This study investigates whether lightweight additive models -- Facebook Prophet (FBP) and NeuralProphet (NP) -- can deliver competitive forecasts for particulate matter (PM$_{2.5}$, PM$_{10}$) in Beijing, China. Using multi-year pollutant and meteorological data, we applied systematic feature selection (correlation, mutual information, mRMR), leakage-safe scaling, and chronological data splits. Both models were trained with pollutant and precursor regressors, with NP additionally leveraging lagged dependencies. For context, two machine learning baselines (LSTM, LightGBM) and one traditional statistical model (SARIMAX) were also implemented. Performance was evaluated on a 7-day holdout using MAE, RMSE, and $R^2$. Results show that FBP consistently outperformed NP, SARIMAX, and the learning-based baselines, achieving test $R^2$ above 0.94 for both pollutants. These findings demonstrate that interpretable additive models remain competitive with both traditional and complex approaches, offering a practical balance of accuracy, transparency, and ease of deployment. 
- oai:arXiv.org:2512.09076v1 + Intelligently Weighting Multiple Reference Models for Direct Preference Optimization of LLMs + https://arxiv.org/abs/2512.10040 + arXiv:2512.10040v1 Announce Type: cross +Abstract: Fine-tuning is integral for aligning large language models (LLMs) with human preferences. Multiple-Reference Preference Optimization (MRPO) builds on Direct Preference Optimization (DPO) by fine-tuning LLMs on preference datasets while regularizing the policy towards a mixture of reference models to leverage their collective desirable properties. However, current methods for setting the reference weights are ad-hoc and statistically unsound, leading to unreliable performance. To address this, we introduce four new weighting strategies: two offline methods that leverage held-out validation signal; one online method that uses a sliding-window estimator to reduce overfitting; and an online method that treats reference weighting as a $K$-armed bandit via Thompson Sampling. Experiments using Qwen2.5-0.5B as the policy model and seven reference models from the Llama, Mistral, Qwen, Yi, and Phi families (0.5B-14B each) show that all 4 of our strategies outperform the current MRPO weighting methods on UltraFeedback and SafeRLHF in preference accuracy. More thought-provokingly, however, we find that single-reference DPO, using any of 6 out of 7 references, consistently outperforms all tested multiple-reference approaches -- calling into question the practical appeal of multiple-reference approaches. + oai:arXiv.org:2512.10040v1 cs.LG cs.AI stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Moazzam Umer Gondal, Hamad ul Qudous, Asma Ahmad Farhan - - - Causal Attribution of Model Performance Gaps in Medical Imaging Under Distribution Shifts - https://arxiv.org/abs/2512.09094 - arXiv:2512.09094v1 Announce Type: cross -Abstract: Deep learning models for medical image segmentation suffer significant performance drops due to distribution shifts, but the causal mechanisms behind these drops remain poorly understood. We extend causal attribution frameworks to high-dimensional segmentation tasks, quantifying how acquisition protocols and annotation variability independently contribute to performance degradation. We model the data-generating process through a causal graph and employ Shapley values to fairly attribute performance changes to individual mechanisms. Our framework addresses unique challenges in medical imaging: high-dimensional outputs, limited samples, and complex mechanism interactions. Validation on multiple sclerosis (MS) lesion segmentation across 4 centers and 7 annotators reveals context-dependent failure modes: annotation protocol shifts dominate when crossing annotators (7.4% $\pm$ 8.9% DSC attribution), while acquisition shifts dominate when crossing imaging centers (6.5% $\pm$ 9.1%). This mechanism-specific quantification enables practitioners to prioritize targeted interventions based on deployment context. - oai:arXiv.org:2512.09094v1 - eess.IV - cs.CV - cs.LG - stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ - Pedro M. 
Gordaliza, Nataliia Molchanova, Jaume Banus, Thomas Sanchez, Meritxell Bach Cuadra + Skyler Wu, Aymen Echarghaoui - Exploratory Mean-Variance with Jumps: An Equilibrium Approach - https://arxiv.org/abs/2512.09224 - arXiv:2512.09224v1 Announce Type: cross -Abstract: Revisiting the continuous-time Mean-Variance (MV) Portfolio Optimization problem, we model the market dynamics with a jump-diffusion process and apply Reinforcement Learning (RL) techniques to facilitate informed exploration within the control space. We recognize the time-inconsistency of the MV problem and adopt the time-inconsistent control (TIC) approach to analytically solve for an exploratory equilibrium investment policy, which is a Gaussian distribution centered on the equilibrium control of the classical MV problem. Our approach accounts for time-inconsistent preferences and actions, and our equilibrium policy is the best option an investor can take at any given time during the investment period. Moreover, we leverage the martingale properties of the equilibrium policy, design a RL model, and propose an Actor-Critic RL algorithm. All of our RL model parameters converge to the corresponding true values in a simulation study. Our numerical study on 24 years of real market data shows that the proposed RL model is profitable in 13 out of 14 tests, demonstrating its practical applicability in real world investment. - oai:arXiv.org:2512.09224v1 - q-fin.PM + Defining the Scope of Learning Analytics: An Axiomatic Approach for Analytic Practice and Measurable Learning Phenomena + https://arxiv.org/abs/2512.10081 + arXiv:2512.10081v1 Announce Type: cross +Abstract: Learning Analytics (LA) has rapidly expanded through practical and technological innovation, yet its foundational identity has remained theoretically under-specified. This paper addresses this gap by proposing the first axiomatic theory that formally defines the essential structure, scope, and limitations of LA. Derived from the psychological definition of learning and the methodological requirements of LA, the framework consists of five axioms specifying discrete observation, experience construction, state transition, and inference. From these axioms, we derive a set of theorems and propositions that clarify the epistemological stance of LA, including the inherent unobservability of learner states, the irreducibility of temporal order, constraints on reachable states, and the impossibility of deterministically predicting future learning. We further define LA structure and LA practice as formal objects, demonstrating the sufficiency and necessity of the axioms and showing that diverse LA approaches -- such as Bayesian Knowledge Tracing and dashboards -- can be uniformly explained within this framework. The theory provides guiding principles for designing analytic methods and interpreting learning data while avoiding naive behaviorism and category errors by establishing an explicit theoretical inference layer between observations and states. This work positions LA as a rigorous science of state transition systems based on observability, establishing the theoretical foundation necessary for the field's maturation as a scholarly discipline. 
+ oai:arXiv.org:2512.10081v1 + cs.CY + cs.AI stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ - Yuling Max Chen, Bin Li, David Saunders - - - Debiased Bayesian Inference for High-dimensional Regression Models - https://arxiv.org/abs/2512.09257 - arXiv:2512.09257v1 Announce Type: cross -Abstract: There has been significant progress in Bayesian inference based on sparsity-inducing (e.g., spike-and-slab and horseshoe-type) priors for high-dimensional regression models. The resulting posteriors, however, in general do not possess desirable frequentist properties, and the credible sets thus cannot serve as valid confidence sets even asymptotically. We introduce a novel debiasing approach that corrects the bias for the entire Bayesian posterior distribution. We establish a new Bernstein-von Mises theorem that guarantees the frequentist validity of the debiased posterior. We demonstrate the practical performance of our proposal through Monte Carlo simulations and two empirical applications in economics. - oai:arXiv.org:2512.09257v1 - econ.EM - math.ST - stat.CO - stat.ME - stat.ML - stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qihui Chen, Zheng Fang, Ruixuan Liu + Kensuke Takii, Changhao Liang, Hiroaki Ogata - On asymptotic behavior of solutions to random fractional Riesz-Bessel equations with cyclic long memory initial conditions - https://arxiv.org/abs/2512.09308 - arXiv:2512.09308v1 Announce Type: cross -Abstract: This paper investigates fractional Riesz-Bessel equations with random initial conditions. The spectra of these random initial conditions exhibit singularities both at zero frequency and at non-zero frequencies, which correspond to the cases of classical long-range dependence and cyclic long-range dependence, respectively. Using spectral methods and asymptotic theory, it is shown that the rescaled solutions of the equations converge to spatio-temporal Gaussian random fields. The limit fields are stationary in space and non-stationary in time. The covariance and spectral structures of the resulting asymptotic random fields are provided. The paper further establishes multiscaling limit theorems for the case of regularly varying asymptotics. A numerical example illustrating the theoretical results is also presented. - oai:arXiv.org:2512.09308v1 - math.PR + Partitioning the Sample Space for a More Precise Shannon Entropy Estimation + https://arxiv.org/abs/2512.10133 + arXiv:2512.10133v1 Announce Type: cross +Abstract: Reliable data-driven estimation of Shannon entropy from small data sets, where the number of examples is potentially smaller than the number of possible outcomes, is a critical matter in several applications. In this paper, we introduce a discrete entropy estimator, where we use the decomposability property in combination with estimations of the missing mass and the number of unseen outcomes to compensate for the negative bias induced by them. Experimental results show that the proposed method outperforms some classical estimators in undersampled regimes, and performs comparably with some well-established state-of-the-art estimators. + oai:arXiv.org:2512.10133v1 + cs.LG math.ST stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Maha Mosaad A. Alghamdi, Andriy Olenko + http://creativecommons.org/licenses/by/4.0/ + Gabriel F. A. 
Bastos, Jugurta Montalv\~ao - Self-Supervised Learning with Gaussian Processes - https://arxiv.org/abs/2512.09322 - arXiv:2512.09322v1 Announce Type: cross -Abstract: Self supervised learning (SSL) is a machine learning paradigm where models learn to understand the underlying structure of data without explicit supervision from labeled samples. The acquired representations from SSL have demonstrated useful for many downstream tasks including clustering, and linear classification, etc. To ensure smoothness of the representation space, most SSL methods rely on the ability to generate pairs of observations that are similar to a given instance. However, generating these pairs may be challenging for many types of data. Moreover, these methods lack consideration of uncertainty quantification and can perform poorly in out-of-sample prediction settings. To address these limitations, we propose Gaussian process self supervised learning (GPSSL), a novel approach that utilizes Gaussian processes (GP) models on representation learning. GP priors are imposed on the representations, and we obtain a generalized Bayesian posterior minimizing a loss function that encourages informative representations. The covariance function inherent in GPs naturally pulls representations of similar units together, serving as an alternative to using explicitly defined positive samples. We show that GPSSL is closely related to both kernel PCA and VICReg, a popular neural network-based SSL method, but unlike both allows for posterior uncertainties that can be propagated to downstream tasks. Experiments on various datasets, considering classification and regression tasks, demonstrate that GPSSL outperforms traditional methods in terms of accuracy, uncertainty quantification, and error control. - oai:arXiv.org:2512.09322v1 + Inference for Batched Adaptive Experiments + https://arxiv.org/abs/2512.10156 + arXiv:2512.10156v1 Announce Type: cross +Abstract: The advantages of adaptive experiments have led to their rapid adoption in economics, other fields, as well as among practitioners. However, adaptive experiments pose challenges for causal inference. This note suggests a BOLS (batched ordinary least squares) test statistic for inference of treatment effects in adaptive experiments. The statistic provides a precision-equalizing aggregation of per-period treatment-control differences under heteroskedasticity. The combined test statistic is a normalized average of heteroskedastic per-period z-statistics and can be used to construct asymptotically valid confidence intervals. We provide simulation results comparing rejection rates in the typical case with few treatment periods and few (or many) observations per batch. + oai:arXiv.org:2512.10156v1 + econ.EM cs.LG stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Yunshan Duan, Sinead Williamson + http://creativecommons.org/licenses/by/4.0/ + Jan Kemper, Davud Rostam-Afschar - CFLight: Enhancing Safety with Traffic Signal Control through Counterfactual Learning - https://arxiv.org/abs/2512.09368 - arXiv:2512.09368v1 Announce Type: cross -Abstract: Traffic accidents result in millions of injuries and fatalities globally, with a significant number occurring at intersections each year. Traffic Signal Control (TSC) is an effective strategy for enhancing safety at these urban junctures. 
Despite the growing popularity of Reinforcement Learning (RL) methods in optimizing TSC, these methods often prioritize driving efficiency over safety, thus failing to address the critical balance between these two aspects. Additionally, these methods usually need more interpretability. CounterFactual (CF) learning is a promising approach for various causal analysis fields. In this study, we introduce a novel framework to improve RL for safety aspects in TSC. This framework introduces a novel method based on CF learning to address the question: ``What if, when an unsafe event occurs, we backtrack to perform alternative actions, and will this unsafe event still occur in the subsequent period?'' To answer this question, we propose a new structure causal model to predict the result after executing different actions, and we propose a new CF module that integrates with additional ``X'' modules to promote safe RL practices. Our new algorithm, CFLight, which is derived from this framework, effectively tackles challenging safety events and significantly improves safety at intersections through a near-zero collision control strategy. Through extensive numerical experiments on both real-world and synthetic datasets, we demonstrate that CFLight reduces collisions and improves overall traffic performance compared to conventional RL methods and the recent safe RL model. Moreover, our method represents a generalized and safe framework for RL methods, opening possibilities for applications in other domains. The data and code are available in the github https://github.com/MJLee00/CFLight-Enhancing-Safety-with-Traffic-Signal-Control-through-Counterfactual-Learning. - oai:arXiv.org:2512.09368v1 - cs.LG + Topology Identification and Inference over Graphs + https://arxiv.org/abs/2512.10183 + arXiv:2512.10183v1 Announce Type: cross +Abstract: Topology identification and inference of processes evolving over graphs arise in timely applications involving brain, transportation, financial, power, as well as social and information networks. This chapter provides an overview of graph topology identification and statistical inference methods for multidimensional relational data. Approaches for undirected links connecting graph nodes are outlined, going all the way from correlation metrics to covariance selection, and revealing ties with smooth signal priors. To account for directional (possibly causal) relations among nodal variables and address the limitations of linear time-invariant models in handling dynamic as well as nonlinear dependencies, a principled framework is surveyed to capture these complexities through judiciously selected kernels from a prescribed dictionary. Generalizations are also described via structural equations and vector autoregressions that can exploit attributes such as low rank, sparsity, acyclicity, and smoothness to model dynamic processes over possibly time-evolving topologies. It is argued that this approach supports both batch and online learning algorithms with convergence rate guarantees, is amenable to tensor (that is, multi-way array) formulations as well as decompositions that are well-suited for multidimensional network data, and can seamlessly leverage high-order statistical information. 
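The chapter abstract above walks from correlation metrics to covariance selection for undirected topologies. As a purely illustrative sketch of that covariance-selection step (not code from the chapter; the toy data, regularization strength, and edge threshold are all assumptions), a sparse inverse-covariance fit can be read off as an undirected graph:

```python
# Hedged illustration of covariance selection for undirected topology identification.
# Synthetic data and thresholds are placeholders, not values from the chapter above.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))      # 500 snapshots of 8 nodal signals
X[:, 1] += 0.8 * X[:, 0]               # plant one dependency between nodes 0 and 1

glasso = GraphicalLasso(alpha=0.1).fit(X)            # sparse precision (inverse covariance)
adjacency = (np.abs(glasso.precision_) > 1e-3).astype(int)
np.fill_diagonal(adjacency, 0)         # nonzero off-diagonal entries read as undirected edges
print(adjacency)
```

The directed, dynamic, and nonlinear extensions mentioned in the abstract swap this estimator (for example, sparse vector autoregressions or kernel-based structural equation fits) while keeping the same nonzero-coefficient-equals-edge reading.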
+ oai:arXiv.org:2512.10183v1 + eess.SP + cs.SI stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + stat.ML + Fri, 12 Dec 2025 00:00:00 -0500 cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Mingyuan Li, Chunyu Liu, Zhuojun Li, Xiao Liu, Guangsheng Yu, Bo Du, Jun Shen, Qiang Wu + http://creativecommons.org/licenses/by/4.0/ + Gonzalo Mateos, Yanning Shen, Georgios B. Giannakis, Ananthram Swami - Drawback of Enforcing Equivariance and its Compensation via the Lens of Expressive Power - https://arxiv.org/abs/2512.09673 - arXiv:2512.09673v1 Announce Type: cross -Abstract: Equivariant neural networks encode symmetry as an inductive bias and have achieved strong empirical performance in wide domains. However, their expressive power remains not well understood. Focusing on 2-layer ReLU networks, this paper investigates the impact of equivariance constraints on the expressivity of equivariant and layer-wise equivariant networks. By examining the boundary hyperplanes and the channel vectors of ReLU networks, we construct an example showing that equivariance constraints could strictly limit expressive power. However, we demonstrate that this drawback can be compensated via enlarging the model size. Furthermore, we show that despite a larger model size, the resulting architecture could still correspond to a hypothesis space with lower complexity, implying superior generalizability for equivariant networks. - oai:arXiv.org:2512.09673v1 + Hybrid Physics-ML Model for Forward Osmosis Flux with Complete Uncertainty Quantification + https://arxiv.org/abs/2512.10457 + arXiv:2512.10457v1 Announce Type: cross +Abstract: Forward Osmosis (FO) is a promising low-energy membrane separation technology, but challenges in accurately modelling its water flux (Jw) persist due to complex internal mass transfer phenomena. Traditional mechanistic models struggle with empirical parameter variability, while purely data-driven models lack physical consistency and rigorous uncertainty quantification (UQ). This study introduces a novel Robust Hybrid Physics-ML framework employing Gaussian Process Regression (GPR) for highly accurate, uncertainty-aware Jw prediction. The core innovation lies in training the GPR on the residual error between the detailed, non-linear FO physical model prediction (Jw_physical) and the experimental water flux (Jw_actual). Crucially, we implement a full UQ methodology by decomposing the total predictive variance (sigma2_total) into model uncertainty (epistemic, from GPR's posterior variance) and input uncertainty (aleatoric, analytically propagated via the Delta method for multi-variate correlated inputs). Leveraging the inherent strength of GPR in low-data regimes, the model, trained on a meagre 120 data points, achieved a state-of-the-art Mean Absolute Percentage Error (MAPE) of 0.26% and an R2 of 0.999 on the independent test data, validating a truly robust and reliable surrogate model for advanced FO process optimization and digital twin development. 
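The hybrid scheme described above trains a Gaussian process on the residual between the physics-based flux prediction and the measured flux. The sketch below is only a schematic of that residual-learning pattern under invented stand-ins: jw_physical, the synthetic operating conditions, and the kernel choice are assumptions, and the input-uncertainty (Delta-method) propagation from the abstract is not shown.

```python
# Schematic residual-learning hybrid: physics prediction + GP-learned correction.
# Everything below (physics stand-in, data, kernel) is illustrative, not the paper's setup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def jw_physical(x):
    # placeholder for the mechanistic forward-osmosis flux model
    return 2.0 * np.log1p(x[:, 0]) - 0.5 * x[:, 1]

rng = np.random.default_rng(1)
X = rng.uniform(0.5, 2.0, size=(120, 2))                       # 120 operating conditions
jw_actual = jw_physical(X) + 0.3 * np.sin(3 * X[:, 0]) + rng.normal(0, 0.02, 120)

residual = jw_actual - jw_physical(X)                          # model-measurement mismatch
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, residual)

correction, sigma_model = gpr.predict(X, return_std=True)      # epistemic (posterior) std
jw_hybrid = jw_physical(X) + correction                        # physics + learned correction
```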
+ oai:arXiv.org:2512.10457v1 cs.LG - cs.AI - cs.NE stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ - Yuzhu Chen, Tian Qin, Xinmei Tian, Fengxiang He, Dacheng Tao + Shiv Ratn, Shivang Rampriyan, Bahni Ray + + + Supporting Migration Policies with Forecasts: Illegal Border Crossings in Europe through a Mixed Approach + https://arxiv.org/abs/2512.10633 + arXiv:2512.10633v1 Announce Type: cross +Abstract: This paper presents a mixed-methodology to forecast illegal border crossings in Europe across five key migratory routes, with a one-year time horizon. The methodology integrates machine learning techniques with qualitative insights from migration experts. This approach aims at improving the predictive capacity of data-driven models through the inclusion of a human-assessed covariate, an innovation that addresses challenges posed by sudden shifts in migration patterns and limitations in traditional datasets. The proposed methodology responds directly to the forecasting needs outlined in the EU Pact on Migration and Asylum, supporting the Asylum and Migration Management Regulation (AMMR). It is designed to provide policy-relevant forecasts that inform strategic decisions, early warning systems, and solidarity mechanisms among EU Member States. By joining data-driven modeling with expert judgment, this work aligns with existing academic recommendations and introduces a novel operational tool tailored for EU migration governance. The methodology is tested and validated with known data to demonstrate its applicability and reliability in migration-related policy context. + oai:arXiv.org:2512.10633v1 + cs.LG + cs.SI + stat.AP + Fri, 12 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + C. Bosco, U. Minora, D. de Rigo, J. Pingsdorf, R. Cortinovis - Innovation ARIMA models application to predict pressure variations in water supply networks with open-loop control. Case study in Noja (Cantabria, Spain) - https://arxiv.org/abs/2512.09717 - arXiv:2512.09717v1 Announce Type: cross -Abstract: Water utilities are increasingly concerned about losses, leaks, and illegal connections in their distribution networks. Pressure control is typically managed through pressure reducing valves with electrically controlled actuators based on predefined tables according to the pressure at the critical point control. This openloop control method lacks direct feedback between the PRV and CPC, making it challenging to distinguish whether pressure variations originate from normal head losses or abnormal network conditions. Unlike traditional applications of ARIMA focused on water demand forecasting, this study explores its novel use in pressure management within distribution networks, aiming to predict P3 pressure based on head losses across a defined hydraulic sector. To achieve this objective, a predictive model based on the Box-Jenkins methodology and its variations is implemented to analyse time series data. An action path is established to determine the optimal model ARIMA, ARMA, ARMAX, etc. which is subsequently validated using real operational data from Noja, a coastal town in northern Spain characterized by significant seasonal population fluctuations. By accurately forecasting CPC pressure, this system enhances the detection of anomalous patterns, enabling more efficient network pressure management. 
The study demonstrates the potential of advanced modelling techniques in optimizing water distribution networks, providing valuable insights to improve system efficiency, reliability, and sustainability in urban environments. - oai:arXiv.org:2512.09717v1 - physics.app-ph + Further Statistical Study of NISQ Experiments + https://arxiv.org/abs/2512.10722 + arXiv:2512.10722v1 Announce Type: cross +Abstract: We revisit and extend some topics that we studied in our previous works (Rinott, Kalai and Shoham 2022; Kalai, Rinott and Shoham, 2023,2024) regarding the Google 2019 "quantum supremacy" experiment. We extend our analysis of the prediction based on Google's digital error model (Formula (77)), based on more detailed data provided by Google. We also provide some preliminary analysis for a few other NISQ experiments. + oai:arXiv.org:2512.10722v1 + quant-ph + cs.CC stat.AP - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ - 10.1016/j.nexus.2025.100423 - Energy Nexus 18 (2025) 100423 - David Munoz-Rodriguez, Manuel J. Gonzalez-Ortega, Maria-Jesus Aguilera-Urena, Andres Ortega-Ballesteros, Alberto-Jesus Perea-Moreno + Gil Kalai, Tomer Shoham, Carsten Voelkmann - New Approximation Results and Optimal Estimation for Fully Connected Deep Neural Networks - https://arxiv.org/abs/2512.09853 - arXiv:2512.09853v1 Announce Type: cross -Abstract: \citet{farrell2021deep} establish non-asymptotic high-probability bounds for general deep feedforward neural network (with rectified linear unit activation function) estimators, with \citet[Theorem 1]{farrell2021deep} achieving a suboptimal convergence rate for fully connected feedforward networks. The authors suggest that improved approximation of fully connected networks could yield sharper versions of \citet[Theorem 1]{farrell2021deep} without altering the theoretical framework. By deriving approximation bounds specifically for a narrower fully connected deep neural network, this note demonstrates that \citet[Theorem 1]{farrell2021deep} can be improved to achieve an optimal rate (up to a logarithmic factor). Furthermore, this note briefly shows that deep neural network estimators can mitigate the curse of dimensionality for functions with compositional structure and functions defined on manifolds. - oai:arXiv.org:2512.09853v1 - econ.EM + What matters for Representation Alignment: Global Information or Spatial Structure? + https://arxiv.org/abs/2512.10794 + arXiv:2512.10794v1 Announce Type: cross +Abstract: Representation alignment (REPA) guides generative training by distilling representations from a strong, pretrained vision encoder to intermediate diffusion features. We investigate a fundamental question: what aspect of the target representation matters for generation, its \textit{global} \revision{semantic} information (e.g., measured by ImageNet-1K accuracy) or its spatial structure (i.e. pairwise cosine similarity between patch tokens)? Prevalent wisdom holds that stronger global semantic performance leads to better generation as a target representation. To study this, we first perform a large-scale empirical analysis across 27 different vision encoders and different model scales. The results are surprising; spatial structure, rather than global performance, drives the generation performance of a target representation. To further study this, we introduce two straightforward modifications, which specifically accentuate the transfer of \emph{spatial} information. 
We replace the standard MLP projection layer in REPA with a simple convolution layer and introduce a spatial normalization layer for the external representation. Surprisingly, our simple method (implemented in $<$4 lines of code), termed iREPA, consistently improves convergence speed of REPA, across a diverse set of vision encoders, model sizes, and training variants (such as REPA, REPA-E, Meanflow, JiT etc). %, etc. Our work motivates revisiting the fundamental working mechanism of representational alignment and how it can be leveraged for improved training of generative models. The code and project page are available at https://end2end-diffusion.github.io/irepa + oai:arXiv.org:2512.10794v1 + cs.CV + cs.AI + cs.GR + cs.LG stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ - Zhaoji Tang + Jaskirat Singh, Xingjian Leng, Zongze Wu, Liang Zheng, Richard Zhang, Eli Shechtman, Saining Xie - HPM-KD: Hierarchical Progressive Multi-Teacher Framework for Knowledge Distillation and Efficient Model Compression - https://arxiv.org/abs/2512.09886 - arXiv:2512.09886v1 Announce Type: cross -Abstract: Knowledge Distillation (KD) has emerged as a promising technique for model compression but faces critical limitations: (1) sensitivity to hyperparameters requiring extensive manual tuning, (2) capacity gap when distilling from very large teachers to small students, (3) suboptimal coordination in multi-teacher scenarios, and (4) inefficient use of computational resources. We present \textbf{HPM-KD}, a framework that integrates six synergistic components: (i) Adaptive Configuration Manager via meta-learning that eliminates manual hyperparameter tuning, (ii) Progressive Distillation Chain with automatically determined intermediate models, (iii) Attention-Weighted Multi-Teacher Ensemble that learns dynamic per-sample weights, (iv) Meta-Learned Temperature Scheduler that adapts temperature throughout training, (v) Parallel Processing Pipeline with intelligent load balancing, and (vi) Shared Optimization Memory for cross-experiment reuse. Experiments on CIFAR-10, CIFAR-100, and tabular datasets demonstrate that HPM-KD: achieves 10x-15x compression while maintaining 85% accuracy retention, eliminates the need for manual tuning, and reduces training time by 30-40% via parallelization. Ablation studies confirm independent contribution of each component (0.10-0.98 pp). HPM-KD is available as part of the open-source DeepBridge library. - oai:arXiv.org:2512.09886v1 + Extrapolation of Periodic Functions Using Binary Encoding of Continuous Numerical Values + https://arxiv.org/abs/2512.10817 + arXiv:2512.10817v1 Announce Type: cross +Abstract: We report the discovery that binary encoding allows neural networks to extrapolate periodic functions beyond their training bounds. We introduce Normalized Base-2 Encoding (NB2E) as a method for encoding continuous numerical values and demonstrate that, using this input encoding, vanilla multi-layer perceptrons (MLP) successfully extrapolate diverse periodic signals without prior knowledge of their functional form. Internal activation analysis reveals that NB2E induces bit-phase representations, enabling MLPs to learn and extrapolate signal structure independently of position. 
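The abstract above does not spell out the NB2E construction, so the following is only a generic guess at what a normalized base-2 encoding of a scalar could look like; the bit depth, normalization range, and the helper name binary_encode are all hypothetical.

```python
# Generic base-2 encoding of normalized scalars; NOT necessarily the paper's NB2E.
import numpy as np

def binary_encode(x, lo, hi, n_bits=8):
    """Map values in [lo, hi) to their first n_bits base-2 fractional digits."""
    frac = (np.asarray(x, dtype=float) - lo) / (hi - lo)
    frac = np.clip(frac, 0.0, 1.0 - 1e-12)        # keep strictly below 1 so bits stay in {0, 1}
    bits = np.empty(frac.shape + (n_bits,))
    for k in range(n_bits):
        frac = frac * 2.0
        bits[..., k] = np.floor(frac)             # k-th binary digit
        frac -= bits[..., k]
    return bits

features = binary_encode(np.array([0.3, 2.5, 7.9]), lo=0.0, hi=10.0)
print(features.shape)   # (3, 8): one 8-bit pattern per value, used as MLP input features
```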
+ oai:arXiv.org:2512.10817v1 cs.LG - stat.AP - Thu, 11 Dec 2025 00:00:00 -0500 + cs.AI + cs.CV + stat.ML + Fri, 12 Dec 2025 00:00:00 -0500 cross - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Gustavo Coelho Haase, Paulo Henrique Dourado da Silva + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Brian P. Powell, Jordan A. Caraballo-Vega, Mark L. Carroll, Thomas Maxwell, Andrew Ptak, Greg Olmschenk, Jorge Martinez-Palomera - Provably Learning from Modern Language Models via Low Logit Rank - https://arxiv.org/abs/2512.09892 - arXiv:2512.09892v1 Announce Type: cross -Abstract: While modern language models and their inner workings are incredibly complex, recent work (Golowich, Liu & Shetty; 2025) has proposed a simple and potentially tractable abstraction for them through the observation that empirically, these language models all seem to have approximately low logit rank. Roughly, this means that a matrix formed by the model's log probabilities of various tokens conditioned on certain sequences of tokens is well approximated by a low rank matrix. - In this paper, our focus is on understanding how this structure can be exploited algorithmically for obtaining provable learning guarantees. Since low logit rank models can encode hard-to-learn distributions such as noisy parities, we study a query learning model with logit queries that reflects the access model for common APIs. Our main result is an efficient algorithm for learning any approximately low logit rank model from queries. We emphasize that our structural assumption closely reflects the behavior that is empirically observed in modern language models. Thus, our result gives what we believe is the first end-to-end learning guarantee for a generative model that plausibly captures modern language models. - oai:arXiv.org:2512.09892v1 + Generative Modeling from Black-box Corruptions via Self-Consistent Stochastic Interpolants + https://arxiv.org/abs/2512.10857 + arXiv:2512.10857v1 Announce Type: cross +Abstract: Transport-based methods have emerged as a leading paradigm for building generative models from large, clean datasets. However, in many scientific and engineering domains, clean data are often unavailable: instead, we only observe measurements corrupted through a noisy, ill-conditioned channel. A generative model for the original data thus requires solving an inverse problem at the level of distributions. In this work, we introduce a novel approach to this task based on Stochastic Interpolants: we iteratively update a transport map between corrupted and clean data samples using only access to the corrupted dataset as well as black box access to the corruption channel. Under appropriate conditions, this iterative procedure converges towards a self-consistent transport map that effectively inverts the corruption channel, thus enabling a generative model for the clean data. We refer to the resulting method as the self-consistent stochastic interpolant (SCSI). It (i) is computationally efficient compared to variational alternatives, (ii) highly flexible, handling arbitrary nonlinear forward models with only black-box access, and (iii) enjoys theoretical guarantees. We demonstrate superior performance on inverse problems in natural image processing and scientific reconstruction, and establish convergence guarantees of the scheme under appropriate assumptions. 
+ oai:arXiv.org:2512.10857v1
 cs.LG
 cs.AI
 stat.ML
 Fri, 12 Dec 2025 00:00:00 -0500
 cross
 http://creativecommons.org/licenses/by/4.0/
 Chirag Modi, Jiequn Han, Eric Vanden-Eijnden, Joan Bruna
 
 
- Analytic queueing model for ambulance services
- https://arxiv.org/abs/1602.06579
- arXiv:1602.06579v2 Announce Type: replace
-Abstract: We present predictive tools to calculate the number of ambulances needed according to demand of entrance calls and time of service. Our analysis discriminates between emergency and non-urgent calls. First, we consider the nonstationary regime where we apply previous results of first-passage time of one dimensional random walks. Then, we reconsider the stationary regime with a detailed discussion of the conditional probabilities and we discuss the key performance indicators.
- oai:arXiv.org:1602.06579v2
- stat.AP
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Pedro A. Pury
+ Decoupled Q-Chunking
+ https://arxiv.org/abs/2512.10926
+ arXiv:2512.10926v1 Announce Type: cross
+Abstract: Temporal-difference (TD) methods learn state and action values efficiently by bootstrapping from their own future value predictions, but such a self-bootstrapping mechanism is prone to bootstrapping bias, where the errors in the value targets accumulate across steps and result in biased value estimates. Recent work has proposed to use chunked critics, which estimate the value of short action sequences ("chunks") rather than individual actions, speeding up value backup. However, extracting policies from chunked critics is challenging: policies must output the entire action chunk open-loop, which can be sub-optimal for environments that require policy reactivity and also challenging to model especially when the chunk length grows. Our key insight is to decouple the chunk length of the critic from that of the policy, allowing the policy to operate over shorter action chunks. We propose a novel algorithm that achieves this by optimizing the policy against a distilled critic for partial action chunks, constructed by optimistically backing up from the original chunked critic to approximate the maximum value achievable when a partial action chunk is extended to a complete one. This design retains the benefits of multi-step value propagation while sidestepping both the open-loop sub-optimality and the difficulty of learning action chunking policies for long action chunks. We evaluate our method on challenging, long-horizon offline goal-conditioned tasks and show that it reliably outperforms prior methods. Code: github.com/ColinQiyangLi/dqc.
+ oai:arXiv.org:2512.10926v1
+ cs.LG
+ cs.AI
+ cs.RO
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qiyang Li, Seohong Park, Sergey Levine
 
 
- Detecting and Localizing Anomalous Cliques in Inhomogeneous Networks using Egonets
- https://arxiv.org/abs/1807.08925
- arXiv:1807.08925v3 Announce Type: replace
-Abstract: Cliques, or fully connected subgraphs, are among the most important and well-studied graph motifs in network science. We consider the problem of finding a statistically anomalous clique hidden in a large network. 
There are two parts to this problem: (1) detection, i.e., determining whether an anomalous clique is present, and (2) localization, i.e., determining which vertices of the network constitute the detected clique. While this problem has been extensively studied under the homogeneous Erdos-Renyi model, little progress has been made beyond this simple setting, and no existing method can perform detection and localization in inhomogeneous networks within finite time. To address this gap, we first show that in homogeneous networks, the anomalousness of a clique depends solely on its size. This property does not carry over to inhomogeneous networks, where the identity of the vertices forming the clique plays a critical role, and a smaller clique can be more anomalous than a larger one. Building on this insight, we propose a unified method for clique detection and localization based on a class of subgraphs called egonets. The proposed method generalizes to a wide variety of inhomogeneous network models and is naturally amenable to parallel computing. We establish the theoretical properties of the proposed method and demonstrate its empirical performance through simulation studies and application to two real world networks. - oai:arXiv.org:1807.08925v3 + Marginal Interventional Effects + https://arxiv.org/abs/2206.10717 + arXiv:2206.10717v2 Announce Type: replace +Abstract: Conventional causal estimands, such as the average treatment effect (ATE), capture how the mean outcome in a population or subpopulation would change if all units were assigned to treatment versus control. Real-world policy changes, however, are often incremental, changing treatment status for only a small segment of the population -- those at or near the "margin of participation." To formalize this idea, two parallel literatures in economics and in statistics and epidemiology have developed what we call interventional effects. In this article, we unify these perspectives by defining the interventional effect (IE) as the per capita effect of a treatment intervention on an outcome of interest, and the marginal interventional effect (MIE) as its limit when the intervention size approaches zero. The IE and MIE can be viewed as unconditional counterparts of the policy-relevant treatment effect (PRTE) and marginal PRTE (MPRTE) from the economics literature. Unlike the PRTE and MPRTE, however, the IE and MIE are defined without reliance on a latent index model and can be identified either under unconfoundedness or with instrumental variables. For both scenarios, we show that MIEs are typically identified without the strong positivity assumption required of the ATE, highlight several "stylized interventions" that may be particularly relevant for policy analysis, discuss several parametric and semiparametric estimation strategies, and illustrate the proposed methods with an empirical example. + oai:arXiv.org:2206.10717v2 stat.ME - stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Subhankar Bhadra, Srijan Sengupta + Xiang Zhou, Aleksei Opacic - Quasi Model-Assisted Estimators under Nonresponse in Sample Surveys - https://arxiv.org/abs/2208.04621 - arXiv:2208.04621v2 Announce Type: replace -Abstract: In the presence of auxiliary information, model-assisted estimators rely on a working model linking the variable of interest to the auxiliary variables in order to improve the efficiency of the Horvitz-Thompson estimator. 
Model-assisted estimators cannot be directly computed with nonresponse since the values of the variable of interest is missing for a part of the sample units. In this article, we present and study a class of quasi-model-assisted estimators that extend model-assisted estimators to settings with non-ignorable nonresponse. These estimators combine a working model and a response model. The former is used to improve the efficiency, the latter to reweight the nonrespondents. A wide range of statistical learning methods can be used to estimate either of these models. We show that several well-known existing estimators are particular cases of quasi-model-assisted estimators. We examine the behavior of these estimators through a simulation study. The results illustrate how these estimators remain competitive in terms of bias and variance when one of the two models is poorly specified. - oai:arXiv.org:2208.04621v2 + Augmented match weighted estimators for average treatment effects + https://arxiv.org/abs/2305.14255 + arXiv:2305.14255v2 Announce Type: replace +Abstract: Propensity score matching (PSM) and augmented inverse propensity weighting (AIPW) are widely used in observational studies to estimate causal effects. The two approaches present complementary features. The AIPW estimator is doubly robust and locally efficient but can be unstable when the propensity scores are close to zero or one due to weighting by the inverse of the propensity score. On the other hand, PSM circumvents the instability of propensity score weighting but it hinges on the correctness of the propensity score model and cannot attain the semiparametric efficiency bound. Besides, the fixed number of matches, K, renders PSM nonsmooth and thus invalidates standard nonparametric bootstrap inference. + This article presents novel augmented match weighted (AMW) estimators that combine the advantages of matching and weighting estimators. AMW adheres to the form of AIPW for its double robustness and local efficiency but it mitigates the instability due to weighting. We replace inverse propensity weights with matching weights resulting from PSM with unfixed K. Meanwhile, we propose a new cross-validation procedure to select K that minimizes the mean squared error anchored around an unbiased estimator of the causal estimand. Besides, we derive the limiting distribution for the AMW estimators showing that they enjoy the double robustness property and can achieve the semiparametric efficiency bound if both nuisance models are correct. As a byproduct of unfixed K which smooths the AMW estimators, nonparametric bootstrap can be adopted for variance estimation and inference. Furthermore, simulation studies and real data applications support that the AMW estimators are stable with extreme propensity scores and their variances can be obtained by naive bootstrap. + oai:arXiv.org:2305.14255v2 stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Caren Hasler, Esther Eustache + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Tanchumin Xu, Yunshu Zhang, Shu Yang - Nonparametric estimation of the job-size distribution for an M/G/1 queue with Poisson sampling - https://arxiv.org/abs/2307.10116 - arXiv:2307.10116v4 Announce Type: replace -Abstract: This work presents a non-parametric estimator for the cumulative distribution function (CDF) of the job-size distribution for a queue with compound Poisson input. 
The workload process is observed according to an independent Poisson sampling process. The nonparametric estimator is constructed by first estimating the characteristic function (CF) and then applying an inversion formula. The convergence rate of the CF estimator at $s$ is shown to be of the order of $s^2/n$, where $n$ is the sample size. This convergence rate is leveraged to explore the bias-variance tradeoff of the inversion estimator. It is demonstrated that within a certain class of continuous distributions, the risk, in terms of MSE, is uniformly bounded by $C n^{-\frac{\eta}{1+\eta}}$, where $C$ is a positive constant and the parameter $\eta>0$ depends on the smoothness of the underlying class of distributions. A heuristic method is further developed to address the case of an unknown rate of the compound Poisson input process. - oai:arXiv.org:2307.10116v4 - math.ST - math.PR - stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + A Bayesian Framework for Multivariate Differential Analysis + https://arxiv.org/abs/2307.08975 + arXiv:2307.08975v3 Announce Type: replace +Abstract: Differential analysis is a routine procedure in the statistical analysis toolbox across many applied fields, including quantitative proteomics, the main illustration of the present paper. The state-of-the-art limma approach uses a hierarchical formulation with moderated-variance estimators for each analyte directly injected into the t-statistic. While standard hypothesis testing strategies are recognised for their low computational cost, allowing for quick extraction of the most differential among thousands of elements, they generally overlook key aspects such as handling missing values, inter-element correlations, and uncertainty quantification. The present paper proposes a fully Bayesian framework for differential analysis, leveraging a conjugate hierarchical formulation for both the mean and the variance. Inference is performed by computing the posterior distribution of compared experimental conditions and sampling from the distribution of differences. This approach provides well-calibrated uncertainty quantification at a similar computational cost as hypothesis testing by leveraging closed-form equations. Furthermore, a natural extension enables multivariate differential analysis that accounts for possible inter-element correlations. We also demonstrate that, in this Bayesian treatment, missing data should generally be ignored in univariate settings, and further derive a tailored approximation that handles multiple imputation for the multivariate setting. We argue that probabilistic statements in terms of effect size and associated uncertainty are better suited to practical decision-making. Therefore, we finally propose simple and intuitive inference criteria, such as the overlap coefficient, which express group similarity as a probability rather than traditional, and often misleading, p-values. + oai:arXiv.org:2307.08975v3 + stat.ME + stat.AP + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Liron Ravner + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Marie Chion, Arthur Leroy - High-dimensional Newey-Powell Test Via Approximate Message Passing - https://arxiv.org/abs/2311.05056 - arXiv:2311.05056v2 Announce Type: replace -Abstract: We propose a high-dimensional extension of the heteroscedasticity test proposed in Newey and Powell (1987). Our test is based on expectile regression in the proportional asymptotic regime where n/p \to \delta \in (0,1]. 
The asymptotic analysis of the test statistic uses the Approximate Message Passing (AMP) algorithm, from which we obtain the limiting distribution of the test and establish its asymptotic power. The numerical performance of the test is validated through an extensive simulation study. As real-data applications, we present the analysis based on ``international economic growth" data (Belloni et al., 2011), which is found to be homoscedastic, and ``supermarket" data (Lan et al., 2016), which is found to be heteroscedastic. - oai:arXiv.org:2311.05056v2 + New Methods for Network Count Time Series + https://arxiv.org/abs/2312.01944 + arXiv:2312.01944v2 Announce Type: replace +Abstract: The original generalized network autoregressive models are poor for modelling count data as they are based on the additive and constant noise assumptions, which is usually inappropriate for count data. We introduce two new models (GNARI and NGNAR) for count network time series by adapting and extending existing count-valued time series models. We present results on the statistical and asymptotic properties of our new models and their estimates obtained by conditional least squares and maximum likelihood. We conduct two simulation studies that verify successful parameter estimation for both models and conduct a further study that shows, for negative network parameters, that our NGNAR model outperforms existing models and our other GNARI model in terms of predictive performance. We model a network time series constructed from COVID-positive counts for counties in New York State during 2020-22 and show that our new models perform considerably better than existing methods for this problem. + oai:arXiv.org:2312.01944v2 stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jing Zhou, Hui Zou + http://creativecommons.org/licenses/by/4.0/ + Hengxu Liu, Guy Nason - LASPATED: A Library for the Analysis of Spatio-Temporal Discrete Data (User Manual) - https://arxiv.org/abs/2407.13889 - arXiv:2407.13889v3 Announce Type: replace -Abstract: This is the User Manual of the LASPATED library. This library is available on GitHub (at https://github.com/vguigues/LASPATED)) and provides a set of tools to analyze spatiotemporal data. A video tutorial for this library is available on Youtube. It is made of a Python package for time and space discretizations and of two packages (one in Matlab and one in C++) implementing the calibration of the probabilistic models for stochastic spatio-temporal data proposed in the companion paper arXiv:2203.16371v2. - oai:arXiv.org:2407.13889v3 - stat.CO - Thu, 11 Dec 2025 00:00:00 -0500 + Bayesian Level Set Clustering + https://arxiv.org/abs/2403.04912 + arXiv:2403.04912v2 Announce Type: replace +Abstract: Classically, Bayesian clustering interprets each component of a mixture model as a cluster. The inferred clustering posterior is highly sensitive to any inaccuracies in the kernel within each component. As this kernel is made more flexible, problems arise in identifying the underlying clusters in the data. To address this pitfall, this article proposes a fundamentally different approach to Bayesian clustering that decouples the problems of clustering and flexible modeling of the data density $f$. Starting with an arbitrary Bayesian model for $f$ and a loss function for defining clusters based on $f$, we develop a Bayesian decision-theoretic framework for density-based clustering. 
Within this framework, we develop a Bayesian level set clustering method to cluster data into connected components of a level set of $f$. We provide theoretical support, including clustering consistency, and highlight performance in a variety of simulated examples. An application to astronomical data illustrates improvements over the popular DBSCAN algorithm in terms of accuracy, insensitivity to tuning parameters, and providing uncertainty quantification. + oai:arXiv.org:2403.04912v2 + stat.ME + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-sa/4.0/ - Vincent Guigues, Anton J. Kleywegt, Giovanni Amorim, Andre Krauss, Victor Hugo Nascimento + http://creativecommons.org/licenses/by/4.0/ + David Buch, Miheer Dewaskar, David B. Dunson - Bayesian Statistical Modeling in Action for Estimation and Forecasting in Low- and Middle-income Countries: The Case of the Family Planning Estimation Tool - https://arxiv.org/abs/2501.00007 - arXiv:2501.00007v2 Announce Type: replace -Abstract: The Family Planning Estimation Tool (FPET) is used in low- and middle-income countries to produce estimates and short-term forecasts of family planning indicators, such as modern contraceptive use and unmet need for contraceptives. Estimates are obtained via a Bayesian statistical model that is fitted to country-specific data from surveys and service statistics data. The model has evolved over the last decade based on user inputs. - In this paper we summarize the main features of the statistical model used in FPET and introduce recent updates related to capturing contraceptive transitions, fitting to survey data that may be error prone, and the use of service statistics data. We assess model performance through a validation exercise and find that FPET is reasonably well calibrated. - We use our experience with FPET to briefly discuss lessons learned and open challenges related to the broader field of statistical modeling for monitoring of demographic and global health indicators. - oai:arXiv.org:2501.00007v2 + Evaluating Organizational Effectiveness: A New Strategy to Leverage Multisite Randomized Trials for Valid Assessment + https://arxiv.org/abs/2407.18360 + arXiv:2407.18360v4 Announce Type: replace +Abstract: Determining which organizations are more effective in implementing an intervention program is essential for theoretically and empirically characterizing exemplary practice and for intervening to enhance the capacity of ineffective ones. Yet sites differ in their local ecological conditions including client composition, alternative programs, and community context. Applying the causal inference framework, this study proposes a formal mathematical definition for the local relative effectiveness of an organization attributable solely to malleable organizational practice. Capitalizing on multisite randomized trials, the identification leverages observed control group outcomes that capture some of the confounding impacts of otherwise unmeasured contextual variation. We propose a two-step mixed-effects modeling (2SME) procedure that adjusts for pre-existing between-site variation. A series of Monte Carlo simulations reveals its superior performance in comparison with conventional methods. We apply the new strategy to an evaluation of Job Corps centers nationwide serving disadvantaged youths. 
+ oai:arXiv.org:2407.18360v4 stat.AP - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-sa/4.0/ - Leontine Alkema, Herbert Susmann, Evan Ray, Shauna Mooney, Niamh Cahill, Kristin Bietsch, A. A. Jayachandran, Rogers Kagimu, Priya Emmart, Zenon Mujani, Khan Muhammad, Brighton Muzavazi, Rebecca Rosenberg, John Stover, Emily Sonneveldt + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Guanglei Hong (University of Chicago), Jonah Deutsch (Mathematica), Peter Kress (Mathematica), Jose Eos Trinidad (University of California-Berkeley), Zhengyan Xu (University of Pennsylvania) - Sampling from density power divergence-based generalized posterior distribution via stochastic optimization - https://arxiv.org/abs/2501.07790 - arXiv:2501.07790v2 Announce Type: replace -Abstract: Robust Bayesian inference using density power divergence (DPD) has emerged as a promising approach for handling outliers in statistical estimation. Although the DPD-based posterior offers theoretical guarantees of robustness, its practical implementation faces significant computational challenges, particularly for general parametric models with intractable integral terms. These challenges are specifically pronounced in high-dimensional settings, where traditional numerical integration methods are inadequate and computationally expensive. Herein, we propose a novel {approximate} sampling methodology that addresses these limitations by integrating the loss-likelihood bootstrap with a stochastic gradient descent algorithm specifically designed for DPD-based estimation. Our approach enables efficient and scalable sampling from DPD-based posteriors for a broad class of parametric models, including those with intractable integrals. We further extend it to accommodate generalized linear models. Through comprehensive simulation studies, we demonstrate that our method efficiently samples from DPD-based posteriors, offering superior computational scalability compared to conventional methods, specifically in high-dimensional settings. The results also highlight its ability to handle complex parametric models with intractable integral terms. - oai:arXiv.org:2501.07790v2 + On Relative Cumulative Residual Information Measure and Its Applications + https://arxiv.org/abs/2410.00125 + arXiv:2410.00125v3 Announce Type: replace +Abstract: In this paper, we develop a relative cumulative residual information measure (RCRI) that aims to quantify the divergence between two survival functions. The dynamic relative cumulative residual information (DRCRI) measure is also introduced. We establish some characterization results under the proportional hazards model assumption. Additionally, we obtained the non-parametric estimators of RCRI and DRCRI measures based on the kernel density type estimator for the survival function. The effectiveness of the estimators are assessed through an extensive Monte Carlo simulation study. We consider data from the third Gaia data release (Gaia DR3) to demonstrate the use of the proposed measure. For this study, we have collected epoch photometry data for the objects Gaia DR3 4111834567779557376 and Gaia DR3 5090605830056251776. The RCRI-based image analysis is conducted using Chest X-ray data from the publicly available dataset. 
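For orientation only: the abstract above defines RCRI as a divergence between two survival functions but does not state the formula here, so the display below gives a common cumulative-residual form from the related literature, with a dynamic version obtained by conditioning on survival past time t; the paper's exact definition may include additional normalization or correction terms.

```latex
% A common cumulative-residual divergence between survival functions \bar{F} and \bar{G};
% assumed form only -- the paper's RCRI/DRCRI may differ in detail.
\[
  \mathrm{RCRI}(F,G) = \int_{0}^{\infty} \bar{F}(x)\,\log\frac{\bar{F}(x)}{\bar{G}(x)}\,\mathrm{d}x,
  \qquad
  \mathrm{DRCRI}_t(F,G) = \int_{t}^{\infty} \frac{\bar{F}(x)}{\bar{F}(t)}\,
      \log\frac{\bar{F}(x)/\bar{F}(t)}{\bar{G}(x)/\bar{G}(t)}\,\mathrm{d}x .
\]
```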
+ oai:arXiv.org:2410.00125v3 stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Naruki Sonobe, Tomotaka Momozaki, Tomoyuki Nakagawa + http://creativecommons.org/publicdomain/zero/1.0/ + Mary Andrews, Smitha S, Sudheesh K. Kattumannil - Dynamic Pricing in the Linear Valuation Model using Shape Constraints - https://arxiv.org/abs/2502.05776 - arXiv:2502.05776v4 Announce Type: replace -Abstract: We propose a shape-constrained approach to dynamic pricing for censored data in the linear valuation model eliminating the need for tuning parameters commonly required by existing methods. Previous works have addressed the challenge of unknown market noise distribution $F_0$ using strategies ranging from kernel methods to reinforcement learning algorithms, such as bandit techniques and upper confidence bounds (UCB), under the assumption that $F_0$ satisfies Lipschitz (or stronger) conditions. In contrast, our method relies on isotonic regression under the weaker assumption that $F_0$ is $\alpha$-H\"older continuous for some $\alpha \in (0,1]$, for which we derive a regret upper bound. Simulations and experiments with real-world data obtained by Welltower Inc (a major healthcare Real Estate Investment Trust) consistently demonstrate that our method attains lower empirical regret in comparison to several existing methods in the literature while offering the advantage of being tuning-parameter free. - oai:arXiv.org:2502.05776v4 - stat.ML - cs.LG - Thu, 11 Dec 2025 00:00:00 -0500 + The causal effects of modified treatment policies under network interference + https://arxiv.org/abs/2412.02105 + arXiv:2412.02105v3 Announce Type: replace +Abstract: Modified treatment policies are a widely applicable class of interventions useful for studying the causal effects of continuous exposures. Approaches to evaluating their causal effects assume no interference, meaning that such effects cannot be learned from data in settings where the exposure of one unit affects the outcomes of others, as is common in spatial or network data. We introduce a new class of intervention, induced modified treatment policies, which we show identify such causal effects in the presence of network interference. Building on recent developments for causal inference in networks, we provide flexible, semi-parametric efficient estimators of the statistical estimand. Numerical experiments demonstrate that an induced modified treatment policy can eliminate the causal, or identification, bias that results from network interference. We use the methodology developed to evaluate the effect of zero-emission vehicle uptake on air pollution in California, strengthening prior evidence. + oai:arXiv.org:2412.02105v3 + stat.ME + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Daniele Bracale, Moulinath Banerjee, Yuekai Sun, Kevin Stoll, Salam Turki + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Salvador V. Balkus, Scott W. Delaney, Nima S. Hejazi - Estimation of Treatment Effects based on Kernel Matching - https://arxiv.org/abs/2502.10958 - arXiv:2502.10958v2 Announce Type: replace -Abstract: Kernel matching is a widely used technique for estimating treatment effects, particularly valuable in observational studies where randomized controlled trials are not feasible. 
While kernel-matching approaches have demonstrated practical advantages in exploiting similarities between treated and control units, their theoretical properties have remained only partially explored. In this paper, we make a key contribution by establishing the asymptotic normality and consistency of kernel-matching estimators for both the average treatment effect (ATE) and the average treatment effect on the treated (ATT) through influence function techniques, thereby providing a rigorous theoretical foundation for their use in causal inference. Furthermore, we derive the asymptotic distributions of the ATE and ATT estimators when the propensity score is estimated rather than known, extending the theoretical guarantees to the practically relevant cases. Through extensive Monte Carlo simulations, the estimators exhibit consistently improved performance over standard treatment-effect estimators. We further illustrate the method by analyzing the National Supported Work Demonstration job-training data with the kernel-matching estimator. - oai:arXiv.org:2502.10958v2 + Energy Based Equality of Distributions Testing for Compositional Data + https://arxiv.org/abs/2412.05199 + arXiv:2412.05199v3 Announce Type: replace +Abstract: Not many tests exist for testing the equality for two or more multivariate distributions with compositional data, perhaps due to their constrained sample space. At the moment, there is only one test suggested that relies upon random projections. We propose a novel test termed {\alpha}-Energy Based Test ({\alpha}-EBT) to compare the multivariate distributions of two (or more) compositional data sets. Similar to the aforementioned test, the new test makes no parametric assumptions about the data and, based on simulation studies it exhibits higher power levels. + oai:arXiv.org:2412.05199v3 stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Chong Ding, Zheng Li, Hon Keung Tony Ng, Wei Gao + Volkan Sevinc, Michail Tsagris - Finite-Sample Analysis of Policy Evaluation for Robust Average Reward Reinforcement Learning - https://arxiv.org/abs/2502.16816 - arXiv:2502.16816v4 Announce Type: replace -Abstract: We present the first finite-sample analysis of policy evaluation in robust average-reward Markov Decision Processes (MDPs). Prior work in this setting have established only asymptotic convergence guarantees, leaving open the question of sample complexity. In this work, we address this gap by showing that the robust Bellman operator is a contraction under a carefully constructed semi-norm, and developing a stochastic approximation framework with controlled bias. Our approach builds upon Multi-Level Monte Carlo (MLMC) techniques to estimate the robust Bellman operator efficiently. To overcome the infinite expected sample complexity inherent in standard MLMC, we introduce a truncation mechanism based on a geometric distribution, ensuring a finite expected sample complexity while maintaining a small bias that decays exponentially with the truncation level. Our method achieves the order-optimal sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-2})$ for robust policy evaluation and robust average reward estimation, marking a significant advancement in robust reinforcement learning theory. 
- oai:arXiv.org:2502.16816v4 + Randomization Tests for Conditional Group Symmetry + https://arxiv.org/abs/2412.14391 + arXiv:2412.14391v2 Announce Type: replace +Abstract: Symmetry plays a central role in the sciences, machine learning, and statistics. While statistical tests for the presence of distributional invariance with respect to groups have a long history, tests for conditional symmetry in the form of equivariance or conditional invariance are absent from the literature. This work initiates the study of nonparametric randomization tests for symmetry (invariance or equivariance) of a conditional distribution under the action of a specified locally compact group. We develop a general framework for randomization tests with finite-sample Type I error control and, using kernel methods, implement tests with finite-sample power lower bounds. We also describe and implement approximate versions of the tests, which are asymptotically consistent. We study their properties empirically using synthetic examples and applications to testing for symmetry in two problems from high-energy particle physics. + oai:arXiv.org:2412.14391v2 + stat.ME + math.ST stat.ML - cs.LG - Thu, 11 Dec 2025 00:00:00 -0500 + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yang Xu, Washim Uddin Mondal, Vaneet Aggarwal + http://creativecommons.org/licenses/by/4.0/ + Kenny Chiu, Alex Sharp, Benjamin Bloem-Reddy - Revenue Maximization Under Sequential Price Competition Via The Estimation Of s-Concave Demand Functions - https://arxiv.org/abs/2503.16737 - arXiv:2503.16737v5 Announce Type: replace -Abstract: We consider price competition among multiple sellers over a selling horizon of $T$ periods. In each period, sellers simultaneously offer their prices (which are made public) and subsequently observe their respective demand (not made public). The demand function of each seller depends on all sellers' prices through a private, unknown, and nonlinear relationship. We propose a dynamic pricing policy that uses semi-parametric least-squares estimation and show that when the sellers employ our policy, their prices converge at a rate of $O(T^{-1/7})$ to the Nash equilibrium prices that sellers would reach if they were fully informed. Each seller incurs a regret of $O(T^{5/7})$ relative to a dynamic benchmark policy. A theoretical contribution of our work is proving the existence of equilibrium under shape-constrained demand functions via the concept of $s$-concavity and establishing regret bounds of our proposed policy. Technically, we also establish new concentration results for the least squares estimator under shape constraints. Our findings offer significant insights into dynamic competition-aware pricing and contribute to the broader study of non-parametric learning in strategic decision-making. - oai:arXiv.org:2503.16737v5 + Beyond Log-Concavity and Score Regularity: Improved Convergence Bounds for Score-Based Generative Models in W2-distance + https://arxiv.org/abs/2501.02298 + arXiv:2501.02298v5 Announce Type: replace +Abstract: Score-based Generative Models (SGMs) aim to sample from a target distribution by learning score functions using samples perturbed by Gaussian noise. Existing convergence bounds for SGMs in the W2-distance rely on stringent assumptions about the data distribution. In this work, we present a novel framework for analyzing W2-convergence in SGMs, significantly relaxing traditional assumptions such as log-concavity and score regularity. 
Leveraging the regularization properties of the Ornstein--Uhlenbeck (OU) process, we show that weak log-concavity of the data distribution evolves into log-concavity over time. This transition is rigorously quantified through a PDE-based analysis of the Hamilton--Jacobi--Bellman equation governing the log-density of the forward process. Moreover, we establish that the drift of the time-reversed OU process alternates between contractive and non-contractive regimes, reflecting the dynamics of concavity. Our approach circumvents the need for stringent regularity conditions on the score function and its estimators, relying instead on milder, more practical assumptions. We demonstrate the wide applicability of this framework through explicit computations on Gaussian mixture models, illustrating its versatility and potential for broader classes of data distributions. + oai:arXiv.org:2501.02298v5 stat.ML cs.LG - math.PR - math.ST - stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Daniele Bracale, Moulinath Banerjee, Cong Shi, Yuekai Sun + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Marta Gentiloni-Silveri, Antonio Ocello - Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems - https://arxiv.org/abs/2503.18309 - arXiv:2503.18309v4 Announce Type: replace -Abstract: Gaussian process state-space models (GPSSMs) offer a principled framework for learning and inference in nonlinear dynamical systems with uncertainty quantification. However, existing GPSSMs are limited by the use of multiple independent stationary Gaussian processes (GPs), leading to prohibitive computational and parametric complexity in high-dimensional settings and restricted modeling capacity for non-stationary dynamics. To address these challenges, we propose an efficient transformed Gaussian process state-space model (ETGPSSM) for scalable and flexible modeling of high-dimensional, non-stationary dynamical systems. Specifically, our ETGPSSM integrates a single shared GP with input-dependent normalizing flows, yielding an expressive non-stationary implicit process prior that can capture complex transition dynamics while significantly reducing model complexity. For the inference of the implicit process, we develop a variational inference algorithm that jointly approximates the posterior over the underlying GP and the neural network parameters defining the normalizing flows. To avoid explicit variational parameterization of the latent states, we further incorporate the ensemble Kalman filter (EnKF) into the variational framework, enabling accurate and efficient state estimation. Extensive empirical evaluations on synthetic and real-world datasets demonstrate the superior performance of our ETGPSSM in system dynamics learning, high-dimensional state estimation, and time-series forecasting, outperforming existing GPSSMs and neural network-based SSMs in terms of computational efficiency and accuracy. - oai:arXiv.org:2503.18309v4 + Sublinear Variational Optimization of Gaussian Mixture Models with Millions to Billions of Parameters + https://arxiv.org/abs/2501.12299 + arXiv:2501.12299v2 Announce Type: replace +Abstract: Gaussian Mixture Models (GMMs) range among the most frequently used models in machine learning. However, training large, general GMMs becomes computationally prohibitive for datasets that have many data points $N$ of high-dimensionality $D$. 
For GMMs with arbitrary covariances, we here derive a highly efficient variational approximation, which is then integrated with mixtures of factor analyzers (MFAs). For GMMs with $C$ components, our proposed algorithm substantially reduces runtime complexity from $\mathcal{O}(NCD^2)$ per iteration to a complexity scaling linearly with $D$ and sublinearly with $NC$. In numerical experiments, we first validate that the complexity reduction results in a sublinear scaling for the entire GMM optimization process. Second, we show on large-scale benchmarks that the sublinear algorithm results in speed-ups of an order-of-magnitude compared to the state-of-the-art. Third, as a proof of concept, we finally train GMMs with over 10 billion parameters on about 100 million images, observing training times of less than nine hours on a single state-of-the-art CPU. Finally, and forth, we demonstrate the effectiveness of large-scale GMMs on the task of zero-shot image denoising, where sublinear training results in state-of-the-art denoising times while competitive denoising performance is maintained. + oai:arXiv.org:2501.12299v2 stat.ML + cs.CV cs.LG - eess.SP - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhidi Lin, Ying Li, Feng Yin, Juan Maro\~nas, Alexandre H. Thi\'ery + Sebastian Salwig, Till Kahlke, Florian Hirschberger, Dennis Forster, J\"org L\"ucke - Exact identifiability analysis for a class of partially observed near-linear stochastic differential equation models - https://arxiv.org/abs/2503.19241 - arXiv:2503.19241v3 Announce Type: replace -Abstract: Stochasticity plays a key role in many biological systems, necessitating the calibration of stochastic mathematical models to interpret associated data. For model parameters to be estimated reliably, it is typically the case that they must be structurally identifiable. Yet, while theory underlying structural identifiability analysis for deterministic differential equation models is highly developed, there are currently no tools for the general assessment of stochastic models. In this work, we present a differential algebra-based framework for the structural identifiability analysis of linear and a class of near-linear partially observed stochastic differential equation (SDE) models. Our framework is based on a deterministic recurrence relation that describes the dynamics of the statistical moments of the system of SDEs. From this relation, we iteratively form a series of necessarily satisfied equations involving only the observed moments, from which we are able to establish structurally identifiable parameter combinations. We demonstrate our framework for a suite of linear (two- and $n$-dimensional) and non-linear (two-dimensional) models. Most importantly, we define the notion of structural identifiability for SDE models and establish the effect of the initial condition on identifiability. We conclude with a discussion on the applicability and limitations of our approach, and potential future research directions in this understudied area. 
- oai:arXiv.org:2503.19241v3 + Bayesian Matrix Factor Models for Demographic Analysis Across Age and Time + https://arxiv.org/abs/2502.09255 + arXiv:2502.09255v2 Announce Type: replace +Abstract: Analyzing demographic data collected across multiple populations, time periods, and age groups is challenging due to the interplay of high dimensionality, demographic heterogeneity among groups, and stochastic variability within smaller groups. This paper proposes a Bayesian matrix factor model to address these challenges. By factorizing count data matrices as the product of low-dimensional latent age and time factors, the model achieves a parsimonious representation that mitigates overfitting and remains computationally feasible even when hundreds of populations are involved. Informative priors enforce smoothness in the age factors and allow for the dynamic evolution of the time factors. A straightforward Markov chain Monte Carlo algorithm is developed for posterior inference. Applying the model to Austrian district-level migration data from 2002 to 2023 demonstrates its ability to accurately reconstruct complex demographic processes using only a fraction of the parameters required by conventional demographic factor models. A forecasting exercise shows that the proposed model consistently outperforms standard benchmarks. Beyond statistical demography, the framework holds promise for a wide range of applications involving noisy, heterogeneous, and high-dimensional non-Gaussian matrix-valued data. + oai:arXiv.org:2502.09255v2 + stat.AP stat.ME - q-bio.QM - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Alexander P Browning, Michael J Chappell, Hamid Rahkooy, Torkel E Loman, Ruth E Baker + Gregor Zens - A Restricted Latent Class Hidden Markov Model for Polytomous Responses, Polytomous Attributes, and Covariates: Identifiability and Application - https://arxiv.org/abs/2503.20940 - arXiv:2503.20940v4 Announce Type: replace -Abstract: We introduce a restricted latent class exploratory model for longitudinal data with ordinal attributes and respondent-specific covariates. Responses follow a time inhomogeneous hidden Markov model where the probability of a particular latent state at a time point is conditional on values at the previous time point of the respondent's covariates and latent state. We prove that the model is identifiable, state a Bayesian formulation, and demonstrate its efficacy in a variety of scenarios through two simulation studies. We apply the model to response data from a mathematics examination, comparing the results to a previously published confirmatory analysis, and also apply it to emotional state response data which was measured over a several-day period. - oai:arXiv.org:2503.20940v4 + Exponential dimensional dependence in high-dimensional Hermite method of moments + https://arxiv.org/abs/2502.17431 + arXiv:2502.17431v2 Announce Type: replace +Abstract: It is numerically well known that moment-based tests for Gaussianity and estimators become increasingly unreliable at higher moment orders; however, this phenomenon has lacked rigorous mathematical justification. In this work, we establish quantitative bounds for Hermite-based moment tests, with matching exponential upper and lower bounds. Our results show that, even under ideal conditions with i.i.d. standard normal data, the sample size must grow exponentially with the highest moment order $d$ used in the test. 
These bounds, derived under both the convex distance and the Kolmogorov-Smirnov distance, are applied to classical procedures, such as the Shenton-Bowman test. + oai:arXiv.org:2502.17431v2 + math.ST + math.PR + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Andreas Basse-O'Connor, David Kramer-Bang + + + Beyond Basic A/B testing: Improving Statistical Efficiency for Business Growth + https://arxiv.org/abs/2505.08128 + arXiv:2505.08128v2 Announce Type: replace +Abstract: The standard A/B testing approaches are mostly based on t-test in large scale industry applications. These standard approaches however suffers from low statistical power in business settings, due to nature of small sample-size or non-Gaussian distribution or return-on-investment (ROI) consideration. In this paper, we (i) show the statistical efficiency of using estimating equation and U statistics, which can address these issues separately; and (ii) propose a novel doubly robust generalized U that allows flexible definition of treatment effect, and can handles small samples, distribution robustness, ROI and confounding consideration in one framework. We provide theoretical results on asymptotics and efficiency bounds, together with insights on the efficiency gain from theoretical analysis. We further conduct comprehensive simulation studies, apply the methods to multiple real A/B tests at a large SaaS company, and share results and learnings that are broadly useful. + oai:arXiv.org:2505.08128v2 stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + cs.LG + math.ST + stat.CO + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Eric Alan Wayman, Steven Andrew Culpepper, Jeff Douglas, Jesse Bowers + Changshuai Wei, Phuc Nguyen, Benjamin Zelditch, Joyce Chen - Evaluation of clinical utility in emulated clinical trials - https://arxiv.org/abs/2506.03991 - arXiv:2506.03991v2 Announce Type: replace -Abstract: Dynamic treatment regimes have been proposed to personalize treatment decisions by utilizing historical patient data, but they may not always improve on the current standard of care. It is thus meaningful to integrate the standard of care into the evaluation of treatment strategies, and previous works have suggested doing so through the concept of clinical utility. Here we will focus on the comparative component of clinical utility as the average outcome had the full population received treatment based on the proposed dynamic treatment regime in comparison to the full population receiving the ``standard" treatment assignment mechanism, such as a physician's choice. Clinical trials to evaluate clinical utility are rarely conducted, and thus, previous works have proposed an emulated clinical trial framework using observational data. However, only one simple estimator was previously suggested, and the practical details of how one would conduct this emulated trial were not detailed. Here, we illuminate these details and propose several estimators of clinical utility based on estimators proposed in the dynamic treatment regime literature. We illustrate the considerations and the estimators in a real data example investigating treatment rules for rheumatoid arthritis, where we highlight that in addition to the standard of care, the current medical guidelines should also be compared to any estimated ``optimal'' decision rule. 
- oai:arXiv.org:2506.03991v2 + Optimal designs for identifying effective doses in drug combination studies + https://arxiv.org/abs/2506.05913 + arXiv:2506.05913v2 Announce Type: replace +Abstract: We consider the optimal design problem for identifying effective dose combinations within drug combination studies where the effect of the combination of two drugs is investigated. Drug combination studies are becoming increasingly important as they investigate potential interaction effects rather than the individual impacts of the drugs. In this situation, identifying effective dose combinations that yield a prespecified effect is of special interest. If nonlinear surface models are used to describe the dose combination-response relationship, these effective dose combinations result in specific contour lines of the fitted response model. + We propose a novel design criterion that targets the precise estimation of these effective dose combinations. In particular, an optimal design minimizes the width of the confidence band of the contour lines of interest. Optimal design theory is developed for this problem, including equivalence theorems and efficiency bounds. The performance of the optimal design is illustrated in different examples modeling dose combination data by various nonlinear surface models. It is demonstrated that the proposed optimal design for identifying effective dose combinations yields a more precise estimation of the effective dose combinations than ray or factorial designs, which are commonly used in practice. This particularly holds true for a case study motivated by data from an oncological dose combination study. + oai:arXiv.org:2506.05913v2 + stat.ME stat.AP - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Johannes Hruza, Arvid Sj\"olander, Erin Gabriel, Samir Bhatt, Michael Sachs + Leonie Sch\"urmeyer, Ludger Sandig, Jorrit K\"uhne, Leonie Theresa Hezler, Bernd-Wolfgang Igl, Kirsten Schorning - Diffusion Secant Alignment for Score-Based Density Ratio Estimation - https://arxiv.org/abs/2509.04852 - arXiv:2509.04852v3 Announce Type: replace -Abstract: Estimating density ratios has become increasingly important with the recent rise of score-based and diffusion-inspired methods. However, current tangent-based approaches rely on a high-variance learning objective, which leads to unstable training and costly numerical integration during inference. We propose \textit{Interval-annealed Secant Alignment Density Ratio Estimation (ISA-DRE)}, a score-based framework along diffusion interpolants that replaces the instantaneous tangent with its interval integral, the secant, as the learning target. We show theoretically that the secant is a provably lower variance and smoother target for neural approximation, and also a strictly more general representation that contains the tangent as the infinitesimal limit. To make secant learning feasible, we introduce the \textit{Secant Alignment Identity (SAI)} to enforce self consistency between secant and tangent representations, and \textit{Contraction Interval Annealing (CIA)} to ensure stable convergence. Empirically, this stability-first formulation produces high efficiency and accuracy. ISA-DRE achieves comparable or superior results with fewer function evaluations, demonstrating robustness under large distribution discrepancies and effectively mitigating the density-chasm problem. 
- oai:arXiv.org:2509.04852v3 - stat.ML - cs.LG - Thu, 11 Dec 2025 00:00:00 -0500 + Bias in estimating Theil, Atkinson, and dispersion indices for gamma mixture populations + https://arxiv.org/abs/2506.22168 + arXiv:2506.22168v2 Announce Type: replace +Abstract: This paper examines the finite-sample bias of estimators for the Theil and Atkinson indices, as well as for the variance-to-mean ratio (VMR), under the assumption that the population follows a finite mixture of gamma distributions with a common rate parameter. Using Mosimann's proportion-sum independence theorem and the structural relationship between the gamma and Dirichlet distributions, these estimators were rewritten as functions of Dirichlet vectors, which enabled the derivation of closed-form analytical expressions for their expected values. A Monte Carlo simulation study evaluates the performance of both the traditional and bias-corrected estimators across a range of mixture scenarios and sample sizes, revealing systematic bias induced by population heterogeneity and demonstrating the effectiveness of the proposed corrections, particularly in small and moderate samples. An empirical application to global per capita GDP data further illustrates the practical relevance of the methodology and confirms the suitability of gamma mixtures for representing structural economic heterogeneity. + oai:arXiv.org:2506.22168v2 + stat.ME + Fri, 12 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Wei Chen, Shigui Li, Jiacheng Li, Jian Xu, Zhiqi Lin, Junmei Yang, Delu Zeng, John Paisley, Qibin Zhao + Jackson Assis, Roberto Vila, Helton Saulo - Next-Generation Reservoir Computing for Dynamical Inference - https://arxiv.org/abs/2509.11338 - arXiv:2509.11338v2 Announce Type: replace -Abstract: We present a simple and scalable implementation of next-generation reservoir computing (NGRC) for modeling dynamical systems from time-series data. The method uses a pseudorandom nonlinear projection of time-delay embedded inputs, allowing the feature-space dimension to be chosen independently of the observation size and offering a flexible alternative to polynomial-based NGRC projections. We demonstrate the approach on benchmark tasks, including attractor reconstruction and bifurcation diagram estimation, using partial and noisy measurements. We further show that small amounts of measurement noise during training act as an effective regularizer, improving long-term autonomous stability compared to standard regression alone. Across all tests, the models remain stable over long rollouts and generalize beyond the training data. The framework offers explicit control of system state during prediction, and these properties make NGRC a natural candidate for applications such as surrogate modeling and digital-twin applications. - oai:arXiv.org:2509.11338v2 + Trustworthy scientific inference with generative models + https://arxiv.org/abs/2508.02602 + arXiv:2508.02602v2 Announce Type: replace +Abstract: Generative artificial intelligence (AI) excels at producing complex data structures (text, images, videos) by learning patterns from training examples. Across scientific disciplines, researchers are now applying generative models to "inverse problems" to directly predict hidden parameters from observed data along with measures of uncertainty. 
While these predictive or posterior-based methods can handle intractable likelihoods and large-scale studies, they can also produce biased or overconfident conclusions even without model misspecifications. We present a solution with Frequentist-Bayes (FreB), a mathematically rigorous protocol that reshapes AI-generated posterior probability distributions into (locally valid) confidence regions that consistently include true parameters with the expected probability, while achieving minimum size when training and target data align. We demonstrate FreB's effectiveness by tackling diverse case studies in the physical sciences: identifying unknown sources under dataset shift, reconciling competing theoretical models, and mitigating selection bias and systematics in observational studies. By providing validity guarantees with interpretable diagnostics, FreB enables trustworthy scientific inference across fields where direct likelihood evaluation remains impossible or prohibitively expensive. + oai:arXiv.org:2508.02602v2 stat.ML + astro-ph.IM cs.LG - Thu, 11 Dec 2025 00:00:00 -0500 + stat.AP + stat.ME + Fri, 12 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Rok Cestnik, Erik A. Martens + James Carzon, Luca Masserano, Joshua D. Ingram, Alex Shen, Antonio Carlos Herling Ribeiro Junior, Tommaso Dorigo, Michele Doro, Joshua S. Speagle, Rafael Izbicki, Ann B. Lee - Generalized Guarantees for Variational Inference in the Presence of Even and Elliptical Symmetry - https://arxiv.org/abs/2511.01064 - arXiv:2511.01064v2 Announce Type: replace -Abstract: We extend several recent results providing symmetry-based guarantees for variational inference (VI) with location-scale families. VI approximates a target density $p$ by the best match $q^*$ in a family $Q$ of tractable distributions that in general does not contain $p$. It is known that VI can recover key properties of $p$, such as its mean and correlation matrix, when $p$ and $Q$ exhibit certain symmetries and $q^*$ is found by minimizing the reverse Kullback-Leibler divergence. We extend these guarantees in two important directions. First, we provide symmetry-based guarantees for $f$-divergences, a broad class that includes the reverse and forward Kullback-Leibler divergences and the $\alpha$-divergences. We highlight properties specific to the reverse Kullback-Leibler divergence under which we obtain our strongest guarantees. Second, we obtain further guarantees for VI when the target density $p$ exhibits even and elliptical symmetries in some but not all of its coordinates. These partial symmetries arise naturally in Bayesian hierarchical models, where the prior induces a challenging geometry but still possesses axes of symmetry. We illustrate these theoretical results in a number of experimental settings. - oai:arXiv.org:2511.01064v2 + Distributional Shrinkage I: Universal Denoisers in Multi-Dimensions + https://arxiv.org/abs/2511.09500 + arXiv:2511.09500v2 Announce Type: replace +Abstract: We revisit the problem of denoising from noisy measurements where only the noise level is known, not the noise distribution. In multi-dimensions, independent noise $Z$ corrupts the signal $X$, resulting in the noisy measurement $Y = X + \sigma Z$, where $\sigma \in (0, 1)$ is a known noise level. Our goal is to recover the underlying signal distribution $P_X$ from denoising $P_Y$. We propose and analyze universal denoisers that are agnostic to a wide range of signal and noise distributions. 
Our distributional denoisers offer order-of-magnitude improvements over the Bayes-optimal denoiser derived from Tweedie's formula, if the focus is on the entire distribution $P_X$ rather than on individual realizations of $X$. Our denoisers shrink $P_Y$ toward $P_X$ optimally, achieving $O(\sigma^4)$ and $O(\sigma^6)$ accuracy in matching generalized moments and density functions. Inspired by optimal transport theory, the proposed denoisers are optimal in approximating the Monge-Amp\`ere equation with higher-order accuracy, and can be implemented efficiently via score matching. + Let $q$ represent the density of $P_Y$; for optimal distributional denoising, we recommend replacing the Bayes-optimal denoiser, \[ \mathbf{T}^*(y) = y + \sigma^2 \nabla \log q(y), \] with denoisers exhibiting less aggressive distributional shrinkage, \[ \mathbf{T}_1(y) = y + \frac{\sigma^2}{2} \nabla \log q(y), \] \[ \mathbf{T}_2(y) = y + \frac{\sigma^2}{2} \nabla \log q(y) - \frac{\sigma^4}{8} \nabla \left( \frac{1}{2} \| \nabla \log q(y) \|^2 + \nabla \cdot \nabla \log q(y) \right) . \] + oai:arXiv.org:2511.09500v2 stat.ML cs.LG - stat.CO - Thu, 11 Dec 2025 00:00:00 -0500 + math.ST + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Charles C. Margossian, Lawrence K. Saul + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Tengyuan Liang - Statistical Properties of Rectified Flow - https://arxiv.org/abs/2511.03193 - arXiv:2511.03193v3 Announce Type: replace -Abstract: Rectified flow (Liu et al., 2022; Liu, 2022; Wu et al., 2023) is a method for defining a transport map between two distributions, and enjoys popularity in machine learning, although theoretical results supporting the validity of these methods are scant. The rectified flow can be regarded as an approximation to optimal transport, but in contrast to other transport methods that require optimization over a function space, computing the rectified flow only requires standard statistical tools such as regression or density estimation, which we leverage to develop empirical versions of transport maps. We study some structural properties of the rectified flow, including existence, uniqueness, and regularity, as well as the related statistical properties, such as rates of convergence and central limit theorems, for some selected estimators. To do so, we analyze the bounded and unbounded cases separately as each presents unique challenges. In both cases, we are able to establish convergence at faster rates than those for the usual nonparametric regression and density estimation. - oai:arXiv.org:2511.03193v3 - math.ST - cs.LG + Scalable and Efficient Multiple Imputation for Case-Cohort Studies via Influence Function-Based Supersampling + https://arxiv.org/abs/2511.14692 + arXiv:2511.14692v2 Announce Type: replace +Abstract: Two-phase sampling designs have been widely adopted in epidemiological studies to reduce costs when measuring certain biomarkers is prohibitively expensive. Under these designs, investigators commonly relate survival outcomes to risk factors using the Cox proportional hazards model. To fully utilize covariates collected in phase 1, multiple imputation (MI) methods have been developed to impute missing covariates for individuals not included in the phase 2 sample. However, MI becomes computationally intensive in large-scale cohorts, particularly when rejection sampling is employed to mitigate bias arising from nonlinear or interaction terms in the analysis model. 
To address this issue, Borgan et al. (2023) proposed a random supersampling (RSS) approach that randomly selects a subset of cohort members for imputation, albeit at the cost of reduced efficiency. In this study, we propose an influence function-based supersampling (ISS) method with weight calibration. The method achieves efficiency comparable to imputing the entire cohort, even with a small supersample, while substantially reducing computational burden. We further demonstrate that the proposed method is especially advantageous when estimating hazard ratios for high-dimensional expensive biomarkers. Extensive simulation studies are conducted, and a real data application is provided using the National Institutes of Health-American Association of Retired Persons (NIH-AARP) Diet and Health Study. + oai:arXiv.org:2511.14692v2 stat.ME - stat.ML - stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Gonzalo Mena, Arun Kumar Kuchibhotla, Larry Wasserman + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jooho Kim, Yei Eun Shin - Function-on-Function Bayesian Optimization - https://arxiv.org/abs/2511.12783 - arXiv:2511.12783v2 Announce Type: replace -Abstract: Bayesian optimization (BO) has been widely used to optimize expensive and gradient-free objective functions across various domains. However, existing BO methods have not addressed the objective where both inputs and outputs are functions, which increasingly arise in complex systems as advanced sensing technologies. To fill this gap, we propose a novel function-on-function Bayesian optimization (FFBO) framework. Specifically, we first introduce a function-on-function Gaussian process (FFGP) model with a separable operator-valued kernel to capture the correlations between function-valued inputs and outputs. Compared to existing Gaussian process models, FFGP is modeled directly in the function space. Based on FFGP, we define a scalar upper confidence bound (UCB) acquisition function using a weighted operator-based scalarization strategy. Then, a scalable functional gradient ascent algorithm (FGA) is developed to efficiently identify the optimal function-valued input. We further analyze the theoretical properties of the proposed method. Extensive experiments on synthetic and real-world data demonstrate the superior performance of FFBO over existing approaches. - oai:arXiv.org:2511.12783v2 - stat.ML - cs.LG - Thu, 11 Dec 2025 00:00:00 -0500 + Location--Scale Calibration for Generalized Posterior + https://arxiv.org/abs/2511.15320 + arXiv:2511.15320v3 Announce Type: replace +Abstract: General Bayesian updating replaces the likelihood with a loss scaled by a learning rate, but posterior uncertainty can depend sharply on that scale. We propose a simple post-processing that aligns generalized posterior draws with their asymptotic target, yielding uncertainty quantification that is invariant to the learning rate. We prove total-variation convergence for generalized posteriors with an effective sample size, allowing sample-size-dependent priors, non-i.i.d. observations, and convex penalties under model misspecification. Within this framework, we justify and extend the open-faced sandwich adjustment (Shaby, 2014), provide general theoretical guarantees for its use within generalized Bayes, and extend it from covariance rescaling to a location--scale calibration whose draws converge in total variation to the target for any learning rate. 
In our empirical illustration, calibrated draws maintain stable coverage, interval width, and bias over orders of magnitude in the learning rate and closely track frequentist benchmarks, whereas uncalibrated posteriors vary markedly. + oai:arXiv.org:2511.15320v3 + stat.ME + Fri, 12 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jingru Huang, Haijie Xu, Manrui Jiang, Chen Zhang + Shu Tamano, Yui Tomo - Solving a Research Problem in Mathematical Statistics with AI Assistance - https://arxiv.org/abs/2511.18828 - arXiv:2511.18828v2 Announce Type: replace -Abstract: Over the last few months, AI models including large language models have improved greatly. There are now several documented examples where they have helped professional mathematical scientists prove new results, sometimes even helping resolve known open problems. In this short note, we add another example to the list, by documenting how we were able to solve a previously unsolved research problem in robust mathematical statistics with crucial help from GPT-5. Our problem concerns robust density estimation, where the observations are perturbed by Wasserstein-bounded contaminations. In a previous preprint (Chao and Dobriban, 2023, arxiv:2308.01853v2), we have obtained upper and lower bounds on the minimax optimal estimation error; which were, however, not sharp. - Starting in October 2025, making significant use of GPT-5 Pro, we were able to derive the minimax optimal error rate (reported in version 3 of the above arxiv preprint). GPT-5 provided crucial help along the way, including by suggesting calculations that we did not think of, and techniques that were not familiar to us, such as the dynamic Benamou-Brenier formulation, for key steps in the analysis. Working with GPT-5 took a few weeks of effort, and we estimate that it could have taken several months to get the same results otherwise. At the same time, there are still areas where working with GPT-5 was challenging: it sometimes provided incorrect references, and glossed over details that sometimes took days of work to fill in. We outline our workflow and steps taken to mitigate issues. Overall, our work can serve as additional documentation for a new age of human-AI collaborative work in mathematical science. - oai:arXiv.org:2511.18828v2 - math.ST - cs.AI - cs.LG - stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + Univariate-Guided Sparse Regression for Biobank-Scale High-Dimensional Omics Data + https://arxiv.org/abs/2511.22049 + arXiv:2511.22049v3 Announce Type: replace +Abstract: We present a scalable framework for computing polygenic risk scores (PRS) in high-dimensional genomic settings using the recently introduced Univariate-Guided Sparse Regression (uniLasso). UniLasso is a two-stage penalized regression procedure that leverages univariate coefficients and magnitudes to stabilize feature selection and enhance interpretability. Building on its theoretical and empirical advantages, we adapt uniLasso for application to the UK Biobank, a population-based repository comprising over one million genetic variants measured on hundreds of thousands of individuals from the United Kingdom. We further extend the framework to incorporate external summary statistics to increase predictive accuracy. Our results demonstrate that uniLasso attains predictive performance comparable to standard Lasso while selecting substantially fewer variants, yielding sparser and more interpretable models. 
Moreover, it exhibits superior performance in estimating PRS relative to its competitors, such as PRS-CS. Integrating external scores further improves prediction while maintaining sparsity. + oai:arXiv.org:2511.22049v3 + stat.ME + Fri, 12 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Edgar Dobriban + Joshua Richland, Tuomo Kiiskinen, William Wang, Sophia Lu, Balasubramanian Narasimhan, Manuel Rivas, Robert Tibshirani - Two-stage Estimation for Causal Inference Involving a Semi-continuous Exposure - https://arxiv.org/abs/2511.20985 - arXiv:2511.20985v3 Announce Type: replace -Abstract: Methods for causal inference are well developed for binary and continuous exposures, but in many settings, the exposure has a substantial mass at zero-such exposures are called semi-continuous. We propose a general causal framework for such semi-continuous exposures, together with a novel two-stage estimation strategy. A two-part propensity structure is introduced for the semi-continuous exposure, with one component for exposure status (exposed vs unexposed) and another for the exposure level among those exposed, and incorporates both into a marginal structural model that disentangles the effects of exposure status and dose. The two-stage procedure sequentially targets the causal dose-response among exposed individuals and the causal effect of exposure status at a reference dose, allowing flexibility in the choice of propensity score methods in the second stage. We establish consistency and asymptotic normality for the resulting estimators, and characterise their limiting values under misspecification of the propensity score models. Simulation studies evaluate finite sample performance and robustness, and an application to a study of prenatal alcohol exposure and child cognition demonstrates how the proposed methods can be used to address a range of scientific questions about both exposure status and exposure intensity. - oai:arXiv.org:2511.20985v3 + Latency-Response Theory Model: Evaluating Large Language Models via Response Accuracy and Chain-of-Thought Length + https://arxiv.org/abs/2512.07019 + arXiv:2512.07019v2 Announce Type: replace +Abstract: The proliferation of Large Language Models (LLMs) necessitates valid evaluation methods to guide downstream applications and actionable future improvements. The Item Response Theory (IRT) has recently emerged as a promising framework for evaluating LLMs via their response accuracy. Beyond simple response accuracy, LLMs' chain of thought (CoT) lengths serve as a vital indicator of their reasoning ability. To leverage the CoT length information to assist the evaluation of LLMs, we propose Latency-Response Theory (LaRT) to jointly model the response accuracy and CoT length by introducing the latent ability, latent speed, and a key correlation parameter between them. We derive an efficient estimation algorithm and establish rigorous identifiability results for the population parameters to ensure the statistical validity of estimation. Theoretical asymptotic analyses and simulation studies demonstrate LaRT's advantages over IRT in terms of higher estimation accuracy and shorter confidence intervals for latent traits. A key finding is that the asymptotic estimation precision of the latent ability under LaRT exceeds that of IRT whenever the latent ability and latent speed are correlated. We collect real responses from diverse LLMs on popular benchmark datasets. 
The application of LaRT reveals a strong negative correlation between the latent ability and latent speed in all benchmarks, with stronger correlation for more difficult benchmarks. This finding supports the intuition that higher reasoning ability correlates with slower speed and longer response latency. LaRT yields different LLM rankings than IRT and outperforms IRT across multiple key evaluation metrics including predictive power, item efficiency, ranking validity, and LLM evaluation efficiency. Code and data are available at https://github.com/Toby-X/Latency-Response-Theory-Model. + oai:arXiv.org:2512.07019v2 stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + cs.AI + stat.AP + stat.ML + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Xiaoya Wang, Richard J. Cook, Yeying Zhu, Tugba Akkaya-Hocagil, R. Colin Carter, Sandra W. Jacobson, Joseph L. Jacobson, Louise M. Ryan + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhiyu Xu, Jia Liu, Yixin Wang, Yuqi Gu - Sequential Randomization Tests Using e-values: Applications for trial monitoring - https://arxiv.org/abs/2512.04366 - arXiv:2512.04366v3 Announce Type: replace -Abstract: Sequential monitoring of randomized trials traditionally relies on parametric assumptions or asymptotic approximations. We discuss a nonparametric sequential test and its application to continuous and time-to-event endpoints that derives validity solely from the randomization mechanism. Using a betting framework, these tests constructs a test martingale by sequentially wagering on treatment assignments given observed outcomes. Under the null hypothesis of no treatment effect, the expected wealth cannot grow, guaranteeing anytime-valid Type I error control regardless of stopping rule. We prove validity and present simulation studies demonstrating calibration and power. These methods provide a conservative, assumption-free complement to model-based sequential analyses. - oai:arXiv.org:2512.04366v3 - stat.ME + Group Cooperation Diverges onto Durable Low versus High Paths: Public Goods Experiments in 134 Honduran Villages + https://arxiv.org/abs/2512.09316 + arXiv:2512.09316v2 Announce Type: replace +Abstract: We performed large, lab-in-the-field experiment (2,591 participants across 134 Honduran villages; ten rounds) and tracked how contribution behavior unfolds in fixed, anonymous groups of size five. Contribution separates early into two durable paths, one low and one high, with rare convergence thereafter. High-path players can be identified with strong accuracy early on. Groups that begin with an early majority of above-norm contributors (about 60%) are very likely finish high. The empirical finding of a bifurcation, consistent with the theory, shows that early, high contributions by socially central people steer groups onto, and help keep them on, a high-cooperation path. + oai:arXiv.org:2512.09316v2 stat.AP - Thu, 11 Dec 2025 00:00:00 -0500 + stat.OT + Fri, 12 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Fernando G Zampieri + Marios Papamichalis, Nicholas Christakis, Feng Fu - ADOPT: Additive Optimal Transport Regression - https://arxiv.org/abs/2512.08118 - arXiv:2512.08118v2 Announce Type: replace -Abstract: Regression analysis for responses taking values in general metric spaces has received increasing attention, particularly for settings with Euclidean predictors $X \in \mathbb{R}^p$ and non-Euclidean responses $Y$ in metric spaces. 
While additive regression is a powerful tool for enhancing interpretability and mitigating the curse of dimensionality in the presence of multivariate predictors, its direct extension is hindered by the absence of vector space operations in general metric spaces. We propose a novel framework for additive optimal transport regression, which incorporates additive structure through optimal geodesic transports. A key idea is to extend the notion of optimal transports in Wasserstein spaces to general geodesic metric spaces. This unified approach accommodates a wide range of responses, including probability distributions, symmetric positive definite (SPD) matrices with various metrics and spherical data. The practical utility of the method is illustrated with correlation matrices derived from resting state fMRI brain imaging data. - oai:arXiv.org:2512.08118v2 - stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + A simple geometric proof for the characterisation of e-merging functions + https://arxiv.org/abs/2512.09708 + arXiv:2512.09708v2 Announce Type: replace +Abstract: E-values offer a powerful framework for aggregating evidence across different (possibly dependent) statistical experiments. A fundamental question is to identify e-merging functions, namely mappings that merge several e-values into a single valid e-value. A simple and elegant characterisation of this function class was recently obtained by Wang(2025), though via technically involved arguments. This note gives a short and intuitive geometric proof of the same characterisation, based on a supporting hyperplane argument applied to concave envelopes. We also show that the result holds even without imposing monotonicity in the definition of e-merging functions, which was needed for the existing proof. This shows that any non-monotone merging rule is automatically dominated by a monotone one, and hence extending the definition beyond the monotone case brings no additional generality. + oai:arXiv.org:2512.09708v2 + math.ST + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wookyeong Song, Hans-Georg M\"uller + http://creativecommons.org/licenses/by/4.0/ + Eugenio Clerico - Information-Theoretic Active Correlation Clustering - https://arxiv.org/abs/2402.03587 - arXiv:2402.03587v3 Announce Type: replace-cross -Abstract: Correlation clustering is a flexible framework for partitioning data based solely on pairwise similarity or dissimilarity information, without requiring the number of clusters as input. However, in many practical scenarios, these pairwise similarities are not available a priori and must be obtained through costly measurements or human feedback. This motivates the use of active learning to query only the most informative pairwise comparisons, enabling effective clustering under budget constraints. In this work, we develop a principled active learning approach for correlation clustering by introducing several information-theoretic acquisition functions that prioritize queries based on entropy and expected information gain. These strategies aim to reduce uncertainty about the clustering structure as efficiently as possible. We evaluate our methods across a range of synthetic and real-world settings and show that they significantly outperform existing baselines in terms of clustering accuracy and query efficiency. Our results highlight the benefits of combining active learning with correlation clustering in settings where similarity information is costly or limited. 
- oai:arXiv.org:2402.03587v3 - cs.LG - stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - IEEE International Conference on Data Mining (ICDM), 2025 - Linus Aronsson, Morteza Haghir Chehreghani + Bayesian Model Selection with an Application to Cosmology + https://arxiv.org/abs/2512.09724 + arXiv:2512.09724v2 Announce Type: replace +Abstract: We investigate cosmological parameter inference and model selection from a Bayesian perspective. Type Ia supernova data from the Dark Energy Survey (DES-SN5YR) are used to test the $\Lambda$CDM, $w$CDM, and CPL cosmological models. Posterior inference is performed via Hamiltonian Monte Carlo using the No-U-Turn Sampler (NUTS) implemented in NumPyro and analyzed with ArviZ in Python. Bayesian model comparison is conducted through Bayes factors computed using the bridgesampling library in R. The results indicate that all three models demonstrate similar predictive performance, but $w$CDM shows stronger evidence relative to $\Lambda$CDM and CPL. We conclude that, under the assumptions and data used in this study, $w$CDM provides a better description of cosmological expansion. + oai:arXiv.org:2512.09724v2 + stat.AP + astro-ph.CO + stat.ME + Fri, 12 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Nikoloz Gigiberia - Kernel Three Pass Regression Filter - https://arxiv.org/abs/2405.07292 - arXiv:2405.07292v4 Announce Type: replace-cross -Abstract: We forecast a single time series using a high-dimensional set of predictors. When these predictors share common underlying dynamics, an approximate latent factor model provides a powerful characterization of their co-movements Bai(2003). These latent factors succinctly summarize the data and can also be used for prediction, alleviating the curse of dimensionality in high-dimensional prediction exercises, see Stock & Watson (2002a). However, forecasting using these latent factors suffers from two potential drawbacks. First, not all pervasive factors among the set of predictors may be relevant, and using all of them can lead to inefficient forecasts. The second shortcoming is the assumption of linear dependence of predictors on the underlying factors. The first issue can be addressed by using some form of supervision, which leads to the omission of irrelevant information. One example is the three-pass regression filter proposed by Kelly & Pruitt (2015). We extend their framework to cases where the form of dependence might be nonlinear by developing a new estimator, which we refer to as the Kernel Three-Pass Regression Filter (K3PRF). This alleviates the aforementioned second shortcoming. The estimator is computationally efficient and performs well empirically. The short-term performance matches or exceeds that of established models, while the long-term performance shows significant improvement. - oai:arXiv.org:2405.07292v4 - econ.EM - q-fin.ST - stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + Rank-based linkage I: triplet comparisons and oriented simplicial complexes + https://arxiv.org/abs/2302.02200 + arXiv:2302.02200v5 Announce Type: replace-cross +Abstract: Rank-based linkage is a new tool for summarizing a collection $S$ of objects according to their relationships. These objects are not mapped to vectors, and ``similarity'' between objects need be neither numerical nor symmetrical. 
All an object needs to do is rank nearby objects by similarity to itself, using a Comparator which is transitive, but need not be consistent with any metric on the whole set. Call this a ranking system on $S$. Rank-based linkage is applied to the $K$-nearest neighbor digraph derived from a ranking system. Computations occur on a 2-dimensional abstract oriented simplicial complex whose faces are among the points, edges, and triangles of the line graph of the undirected $K$-nearest neighbor graph on $S$. In $|S| K^2$ steps it builds an edge-weighted linkage graph $(S, \mathcal{L}, \sigma)$ where $\sigma(\{x, y\})$ is called the in-sway between objects $x$ and $y$. Take $\mathcal{L}_t$ to be the links whose in-sway is at least $t$, and partition $S$ into components of the graph $(S, \mathcal{L}_t)$, for varying $t$. Rank-based linkage is a functor from a category of ``out-ordered'' digraphs to a category of partitioned sets, with the practical consequence that augmenting the set of objects in a rank-respectful way gives a fresh clustering which does not ``rip apart'' the previous one. The same holds for single linkage clustering in the metric space context, but not for typical optimization-based methods. Orientation sheaves play a fundamental role and ensure that partially overlapping data sets can be ``glued'' together. Open combinatorial problems are presented in the last section.
+ oai:arXiv.org:2302.02200v5
+ math.CO
+ math.ST
+ stat.TH
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ R. W. R. Darling, Will Grilliette, Adam Logan


+ Internal Evaluation of Density-Based Clusterings with Noise
+ https://arxiv.org/abs/2503.00127
+ arXiv:2503.00127v2 Announce Type: replace-cross
+Abstract: Being able to evaluate the quality of a clustering result even in the absence of ground truth cluster labels is fundamental for research in data mining. 
However, most cluster validation indices (CVIs) do not capture noise assignments by density-based clustering methods like DBSCAN or HDBSCAN, even though the ability to correctly determine noise is crucial for successful clustering. In this paper, we propose DISCO, a Density-based Internal Score for Clusterings with nOise, the first CVI to explicitly assess the quality of noise assignments rather than merely counting them. DISCO is based on the established idea of the Silhouette Coefficient, but adopts density-connectivity to evaluate clusters of arbitrary shapes, and proposes explicit noise evaluation: it rewards correctly assigned noise labels and penalizes noise labels where a cluster label would have been more appropriate. The pointwise definition of DISCO allows for the seamless integration of noise evaluation into the final clustering evaluation, while also enabling explainable evaluations of the clustered data. In contrast to most state-of-the-art, DISCO is well-defined and also covers edge cases that regularly appear as output from clustering algorithms, such as singleton clusters or a single cluster plus noise. + oai:arXiv.org:2503.00127v2 cs.LG stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by-sa/4.0/ - Roi Benita, Michael Elad, Joseph Keshet + http://creativecommons.org/licenses/by/4.0/ + Anna Beer, Lena Krieger, Pascal Weber, Martin Ritzert, Ira Assent, Claudia Plant - Automatic Inference for Value-Added Regressions - https://arxiv.org/abs/2503.19178 - arXiv:2503.19178v2 Announce Type: replace-cross -Abstract: A large empirical literature regresses outcomes on empirical Bayes shrinkage estimates of value-added, yet little is known about whether this approach leads to unbiased estimates and valid inference for the downstream regression coefficients. We study a general class of empirical Bayes estimators and the properties of the resulting regression coefficients. We show that estimators can be asymptotically biased and inference can be invalid if the shrinkage estimator does not account for heteroskedasticity in the noise when estimating value added. By contrast, shrinkage estimators properly constructed to model this heteroskedasticity perform an automatic bias correction: the associated regression estimator is asymptotically unbiased, asymptotically normal, and efficient in the sense that it is asymptotically equivalent to regressing on the true (latent) value-added. Further, OLS standard errors from regressing on shrinkage estimates are consistent in this case. As such, efficient inference is easy for practitioners to implement: simply regress outcomes on shrinkage estimates of value-added that account for noise heteroskedasticity. - oai:arXiv.org:2503.19178v2 - econ.EM - stat.ME - Thu, 11 Dec 2025 00:00:00 -0500 + On the Intersection and Composition properties of conditional independence + https://arxiv.org/abs/2504.11978 + arXiv:2504.11978v2 Announce Type: replace-cross +Abstract: Compositional graphoids are fundamental discrete structures which appear in probabilistic reasoning, particularly in the area of graphical models. They are semigraphoids which satisfy the Intersection and Composition properties. These important properties, however, are not enjoyed by general probability distributions. This paper surveys what is known about them, providing systematic constructions of examples and counterexamples as well as necessary and sufficient conditions. 
Novel sufficient conditions for both properties are derived in the context of discrete random variables via information-theoretic tools. + oai:arXiv.org:2504.11978v2 + cs.IT + math.IT + math.ST + stat.TH + Fri, 12 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tian Xie + Tobias Boege - Inference on effect size after multiple hypothesis testing - https://arxiv.org/abs/2503.22369 - arXiv:2503.22369v3 Announce Type: replace-cross -Abstract: Significant treatment effects are often emphasized when interpreting and summarizing empirical findings in studies that estimate multiple, possibly many, treatment effects. Under this kind of selective reporting, conventional treatment effect estimates may be biased and their corresponding confidence intervals may undercover the true effect sizes. We propose new estimators and confidence intervals that provide valid inferences on the effect sizes of the significant effects after multiple hypothesis testing. Our methods are based on the principle of selective conditional inference and complement a wide range of tests, including step-up tests and bootstrap-based step-down tests. Our approach is scalable, allowing us to study an application with over 370 estimated effects. We justify our procedure for asymptotically normal treatment effect estimators. We provide two empirical examples that demonstrate bias correction and confidence interval adjustments for significant effects. The magnitude and direction of the bias correction depend on the correlation structure of the estimated effects and whether the interpretation of the significant effects depends on the (in)significance of other effects. - oai:arXiv.org:2503.22369v3 - econ.EM - math.ST - stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + Potential Landscapes Reveal Spatiotemporal Structure in Urban Mobility: Hodge Decomposition and Principal Component Analysis of Tokyo Before and During COVID-19 + https://arxiv.org/abs/2505.20929 + arXiv:2505.20929v4 Announce Type: replace-cross +Abstract: Understanding human mobility is vital to solving societal challenges, such as epidemic control and urban transportation optimization. Recent advancements in data collection now enable the exploration of dynamic mobility patterns in human flow. However, the vast volume and complexity of mobility data make it difficult to interpret spatiotemporal patterns directly, necessitating effective information reduction. The core challenge is to balance data simplification with information preservation: methods must retain location-specific information about human flows from origins to destinations while reducing the data to a comprehensible level. This study proposes a two-step dimensionality reduction framework: First, combinatorial Hodge theory is applied to the given origin--destination (OD) matrices with timestamps to construct a set of potential landscapes of human flow, preserving imbalanced trip information between locations. Second, principal component analysis (PCA) expresses the time series of potential landscapes as a linear combination of a few static spatial components, with their coefficients representing temporal variations. The framework systematically decouples the spatial and temporal components of the given data. By implementing this two-step reduction method, we reveal large weight variations during a pandemic, characterized by an overall decline in mobility and stark contrasts between weekdays and holidays. 
These findings demonstrate the effectiveness of our framework in uncovering complex mobility patterns and its potential to inform urban planning and public health interventions. + oai:arXiv.org:2505.20929v4 + cs.SI + physics.soc-ph + stat.AP + Fri, 12 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by-sa/4.0/ - Andreas Dzemski, Ryo Okui, Wenjie Wang + http://creativecommons.org/licenses/by/4.0/ + Yunhan Du, Takaaki Aoki, Naoya Fujiwara - Adversarially Pretrained Transformers May Be Universally Robust In-Context Learners - https://arxiv.org/abs/2505.14042 - arXiv:2505.14042v2 Announce Type: replace-cross -Abstract: Adversarial training is one of the most effective adversarial defenses, but it incurs a high computational cost. In this study, we present the first theoretical analysis suggesting that adversarially pretrained transformers can serve as universally robust foundation models -- models that can robustly adapt to diverse downstream tasks with only lightweight tuning. Specifically, we demonstrate that single-layer linear transformers, after adversarial pretraining across a variety of classification tasks, can robustly generalize to unseen classification tasks through in-context learning from clean demonstrations (i.e., without requiring additional adversarial training or examples). This universal robustness stems from the model's ability to adaptively focus on robust features within given tasks. We also show the two open challenges for attaining robustness: accuracy--robustness trade-off and sample-hungry training. This study initiates the discussion on the utility of universally robust foundation models. While their training is expensive, the investment would prove worthwhile as downstream tasks can enjoy free adversarial robustness. The code is available at https://github.com/s-kumano/universally-robust-in-context-learner. - oai:arXiv.org:2505.14042v2 + Improved Regret Bounds for Gaussian Process Upper Confidence Bound in Bayesian Optimization + https://arxiv.org/abs/2506.01393 + arXiv:2506.01393v3 Announce Type: replace-cross +Abstract: This paper addresses the Bayesian optimization problem (also referred to as the Bayesian setting of the Gaussian process bandit), where the learner seeks to minimize the regret under a function drawn from a known Gaussian process (GP). Under a Mat\'ern kernel with a certain degree of smoothness, we show that the Gaussian process upper confidence bound (GP-UCB) algorithm achieves $\tilde{O}(\sqrt{T})$ cumulative regret with high probability. Furthermore, our analysis yields $O(\sqrt{T \ln^2 T})$ regret under a squared exponential kernel. These results fill the gap between the existing regret upper bound for GP-UCB and the best-known bound provided by Scarlett (2018). The key idea in our proof is to capture the concentration behavior of the input sequence realized by GP-UCB, enabling a more refined analysis of the GP's information gain. 
+ oai:arXiv.org:2506.01393v3 cs.LG - cs.CV stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Soichiro Kumano, Hiroshi Kera, Toshihiko Yamasaki + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shogo Iwazaki - A Framework for Controllable Multi-objective Learning with Annealed Stein Variational Hypernetworks - https://arxiv.org/abs/2506.06715 - arXiv:2506.06715v3 Announce Type: replace-cross -Abstract: Pareto Set Learning (PSL) is popular as an efficient approach to obtaining the complete optimal solution in Multi-objective Learning (MOL). A set of optimal solutions approximates the Pareto set, and its mapping is a set of dense points in the Pareto front in objective space. However, some current methods face a challenge: how to make the Pareto solution is diverse while maximizing the hypervolume value. In this paper, we propose a novel method to address this challenge, which employs Stein Variational Gradient Descent (SVGD) to approximate the entire Pareto set. SVGD pushes a set of particles towards the Pareto set by applying a form of functional gradient descent, which helps to converge and diversify optimal solutions. Additionally, we employ diverse gradient direction strategies to thoroughly investigate a unified framework for SVGD in multi-objective optimization and adapt this framework with an annealing schedule to promote stability. We introduce our method, SVH-MOL, and validate its effectiveness through extensive experiments on multi-objective problems and multi-task learning, demonstrating its superior performance. - oai:arXiv.org:2506.06715v3 + Geometric Regularity in Deterministic Sampling Dynamics of Diffusion-based Generative Models + https://arxiv.org/abs/2506.10177 + arXiv:2506.10177v3 Announce Type: replace-cross +Abstract: Diffusion-based generative models employ stochastic differential equations (SDEs) and their equivalent probability flow ordinary differential equations (ODEs) to establish a smooth transformation between complex high-dimensional data distributions and tractable prior distributions. In this paper, we reveal a striking geometric regularity in the deterministic sampling dynamics of diffusion generative models: each simulated sampling trajectory along the gradient field lies within an extremely low-dimensional subspace, and all trajectories exhibit an almost identical boomerang shape, regardless of the model architecture, applied conditions, or generated content. We characterize several intriguing properties of these trajectories, particularly under closed-form solutions based on kernel-estimated data modeling. We also demonstrate a practical application of the discovered trajectory regularity by proposing a dynamic programming-based scheme to better align the sampling time schedule with the underlying trajectory structure. This simple strategy requires minimal modification to existing deterministic numerical solvers, incurs negligible computational overhead, and achieves superior image generation performance, especially in regions with only 5 - 10 function evaluations. + oai:arXiv.org:2506.10177v3 cs.LG + cond-mat.stat-mech + cs.CV stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Minh-Duc Nguyen, Dung D. Le + http://creativecommons.org/licenses/by/4.0/ + 10.1088/1742-5468/ae17ac + J. Stat. Mech. 
(2025) 124002 + Defang Chen, Zhenyu Zhou, Can Wang, Siwei Lyu - Efficient $Q$-Learning and Actor-Critic Methods for Robust Average Reward Reinforcement Learning - https://arxiv.org/abs/2506.07040 - arXiv:2506.07040v3 Announce Type: replace-cross -Abstract: We present a non-asymptotic convergence analysis of $Q$-learning and actor-critic algorithms for robust average-reward Markov Decision Processes (MDPs) under contamination, total-variation (TV) distance, and Wasserstein uncertainty sets. A key ingredient of our analysis is showing that the optimal robust $Q$ operator is a strict contraction with respect to a carefully designed semi-norm (with constant functions quotiented out). This property enables a stochastic approximation update that learns the optimal robust $Q$-function using $\tilde{\mathcal{O}}(\epsilon^{-2})$ samples. We also provide an efficient routine for robust $Q$-function estimation, which in turn facilitates robust critic estimation. Building on this, we introduce an actor-critic algorithm that learns an $\epsilon$-optimal robust policy within $\tilde{\mathcal{O}}(\epsilon^{-2})$ samples. We provide numerical simulations to evaluate the performance of our algorithms. - oai:arXiv.org:2506.07040v3 + Proof of a perfect platonic representation hypothesis + https://arxiv.org/abs/2507.01098 + arXiv:2507.01098v2 Announce Type: replace-cross +Abstract: In this note, we elaborate on and explain in detail the proof given by Ziyin et al. (2025) of the ``perfect" Platonic Representation Hypothesis (PRH) for the embedded deep linear network model (EDLN). We show that if trained with the stochastic gradient descent (SGD), two EDLNs with different widths and depths and trained on different data will become Perfectly Platonic, meaning that every possible pair of layers will learn the same representation up to a rotation. Because most of the global minima of the loss function are not Platonic, that SGD only finds the perfectly Platonic solution is rather extraordinary. The proof also suggests at least six ways the PRH can be broken. We also show that in the EDLN model, the emergence of the Platonic representations is due to the same reason as the emergence of progressive sharpening. This implies that these two seemingly unrelated phenomena in deep learning can, surprisingly, have a common cause. Overall, the theory and proof highlight the importance of understanding emergent "entropic forces" due to the irreversibility of SGD training and their role in representation learning. The goal of this note is to be instructive while avoiding jargon and lengthy technical details. + oai:arXiv.org:2507.01098v2 cs.LG - cs.AI + cond-mat.dis-nn + q-bio.NC stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yang Xu, Swetha Ganesh, Vaneet Aggarwal + Liu Ziyin, Isaac Chuang - Bayesian power spectral density estimation for LISA noise based on P-splines with a parametric boost - https://arxiv.org/abs/2510.00533 - arXiv:2510.00533v2 Announce Type: replace-cross -Abstract: Flexible and accurate noise characterization is crucial for the precise estimation of gravitational-wave parameters. We introduce a Bayesian method for estimating the power spectral density (PSD) of long, stationary time series, explicitly tailored for LISA data analysis. 
Our approach models the PSD as the geometric mean of a parametric and a nonparametric component, combining the knowledge from parametric models with the flexibility to capture deviations from theoretical expectations. The nonparametric component is expressed by a mixture of penalized B-splines. Adaptive, data-driven knot placement, performed once at initialization, removes the need for reversible-jump Markov chain Monte Carlo, while hierarchical roughness-penalty priors prevent overfitting. Validation on simulated autoregressive AR(4) data demonstrates estimator consistency and shows that well-matched parametric components reduce the integrated absolute error compared to an uninformative baseline, requiring fewer spline knots to achieve comparable accuracy. Applied to one year of simulated LISA X-channel (univariate) noise, our method achieves relative integrated absolute errors of $\mathcal{O}(10^{-2})$, making it suitable for iterative analysis pipelines and multi-year mission data sets. - oai:arXiv.org:2510.00533v2 - gr-qc - astro-ph.IM - physics.comp-ph - stat.CO - Thu, 11 Dec 2025 00:00:00 -0500 + Dynamic Regret Reduces to Kernelized Static Regret + https://arxiv.org/abs/2507.05478 + arXiv:2507.05478v2 Announce Type: replace-cross +Abstract: We study dynamic regret in online convex optimization, where the objective is to achieve low cumulative loss relative to an arbitrary benchmark sequence. By observing that competing with an arbitrary sequence of comparators $u_{1},\ldots,u_{T}$ in $\mathcal{W}\subseteq\mathbb{R}^{d}$ is equivalent to competing with a fixed comparator function $u:[1,T]\to \mathcal{W}$, we frame dynamic regret minimization as a static regret problem in a function space. By carefully constructing a suitable function space in the form of a Reproducing Kernel Hilbert Space (RKHS), our reduction enables us to recover the optimal $R_{T}(u_{1},\ldots,u_{T}) = \mathcal{O}(\sqrt{\sum_{t}\|u_{t}-u_{t-1}\|T})$ dynamic regret guarantee in the setting of linear losses, and yields new scale-free and directionally-adaptive dynamic regret guarantees. Moreover, unlike prior dynamic-to-static reductions -- which are valid only for linear losses -- our reduction holds for any sequence of losses, allowing us to recover $\mathcal{O}\big(\|u\|^2_{\mathcal{H}}+d_{\mathrm{eff}}(\lambda)\ln T\big)$ bounds in exp-concave and improper linear regression settings, where $d_{\mathrm{eff}}(\lambda)$ is a measure of complexity of the RKHS. Despite working in an infinite-dimensional space, the resulting reduction leads to algorithms that are computable in practice, due to the reproducing property of RKHSs. + oai:arXiv.org:2507.05478v2 + cs.LG + stat.ML + Fri, 12 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Nazeela Aimen, Patricio Maturana-Russel, Avi Vajpeyi, Nelson Christensen, Renate Meyer + http://creativecommons.org/licenses/by/4.0/ + Andrew Jacobsen, Alessandro Rudi, Francesco Orabona, Nicolo Cesa-Bianchi - A Generic Machine Learning Framework for Radio Frequency Fingerprinting - https://arxiv.org/abs/2510.09775 - arXiv:2510.09775v2 Announce Type: replace-cross -Abstract: Fingerprinting radio frequency (RF) emitters typically involves finding unique characteristics that are featured in their received signal. These fingerprints are nuanced, but sufficiently detailed, motivating the pursuit of methods that can successfully extract them. 
The downstream task that requires the most meticulous RF fingerprinting (RFF) is known as specific emitter identification (SEI), which entails recognising each individual transmitter. RFF and SEI have a long history, with numerous defence and civilian applications, such as signal intelligence, electronic surveillance, and physical-layer authentication of wireless devices. In recent years, data-driven RFF approaches have become popular due to their ability to automatically learn intricate fingerprints. They generally deliver superior performance when compared to traditional RFF techniques that are often labour-intensive, inflexible, and only applicable to a particular emitter type or transmission scheme. In this paper, we present a generic and versatile machine learning (ML) framework for data-driven RFF with several popular downstream tasks such as SEI, data association (EDA), and RF emitter clustering (RFEC). It is emitter-type agnostic. We then demonstrate the introduced framework for several tasks using real RF datasets for spaceborne surveillance, signal intelligence, and counter-drone applications.
- oai:arXiv.org:2510.09775v2
- cs.LG
- cs.CR
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Alex Hiles, Bashar I. Ahmad


+ Generalized Kernelized Bandits: A Novel Self-Normalized Bernstein-Like Dimension-Free Inequality and Regret Bounds
+ https://arxiv.org/abs/2508.01681
+ arXiv:2508.01681v2 Announce Type: replace-cross
+Abstract: We study the regret minimization problem in the novel setting of generalized kernelized bandits (GKBs), where we optimize an unknown function $f^*$ belonging to a reproducing kernel Hilbert space (RKHS) having access to samples generated by an exponential family (EF) reward model whose mean is a non-linear function $\mu(f^*)$. This setting extends both kernelized bandits (KBs) and generalized linear bandits (GLBs), providing a unified view of both settings. We propose an optimistic regret minimization algorithm, GKB-UCB, and we explain why existing self-normalized concentration inequalities used for KBs and GLBs do not allow us to provide tight regret guarantees. For this reason, we devise a novel self-normalized Bernstein-like dimension-free inequality that applies to a Hilbert space of functions with bounded norm, representing a contribution of independent interest. Based on it, we analyze GKB-UCB, deriving a regret bound of order $\widetilde{O}(\gamma_T \sqrt{T/\kappa_*})$, where $T$ is the learning horizon, ${\gamma}_T$ the maximal information gain, and $\kappa_*$ a term characterizing the magnitude of the expected reward non-linearity. Our result is tight in its dependence on $T$, $\gamma_T$, and $\kappa_*$ for both KBs and GLBs. Finally, we present a tractable version of GKB-UCB, Trac-GKB-UCB, which attains similar regret guarantees, and we discuss its time and space complexity.
+ oai:arXiv.org:2508.01681v2
+ cs.LG
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Alberto Maria Metelli, Simone Drago, Marco Mussi


- Online Price Competition under Generalized Linear Demands
- https://arxiv.org/abs/2511.10718
- arXiv:2511.10718v3 Announce Type: replace-cross
-Abstract: We study sequential price competition among $N$ sellers, each influenced by the pricing decisions of their rivals. 
Specifically, the demand function for each seller $i$ follows the single index model $\lambda_i(\mathbf{p}) = \mu_i(\langle \boldsymbol{\theta}_{i,0}, \mathbf{p} \rangle)$, with known increasing link $\mu_i$ and unknown parameter $\boldsymbol{\theta}_{i,0}$, where the vector $\mathbf{p}$ denotes the vector of prices offered by all the sellers simultaneously at a given instant. Each seller observes only their own realized demand -- unobservable to competitors -- and the prices set by rivals. Our framework generalizes existing approaches that focus solely on linear demand models. We propose a novel decentralized policy, PML-GLUCB, that combines penalized MLE with an upper-confidence pricing rule, removing the need for coordinated exploration phases across sellers -- which is integral to previous linear models -- and accommodating both binary and real-valued demand observations. Relative to a dynamic benchmark policy, each seller achieves $O(N^{2}\sqrt{T}\log(T))$ regret, which essentially matches the optimal rate known in the linear setting. A significant technical contribution of our work is the development of a variant of the elliptical potential lemma -- typically applied in single-agent systems -- adapted to our competitive multi-agent environment. - oai:arXiv.org:2511.10718v3 - cs.GT - math.ST - stat.ME - stat.TH - Thu, 11 Dec 2025 00:00:00 -0500 + If generative AI is the answer, what is the question? + https://arxiv.org/abs/2509.06120 + arXiv:2509.06120v2 Announce Type: replace-cross +Abstract: Beginning with text and images, generative AI has expanded to audio, video, computer code, and molecules. Yet, if generative AI is the answer, what is the question? We explore the foundations of generation as a distinct machine learning task with connections to prediction, compression, and decision-making. We survey five major generative model families: autoregressive models, variational autoencoders, normalizing flows, generative adversarial networks, and diffusion models. We then introduce a probabilistic framework that emphasizes the distinction between density estimation and generation. We review a game-theoretic framework with a two-player adversary-learner setup to study generation. We discuss post-training modifications that prepare generative models for deployment. We end by highlighting some important topics in socially responsible generation such as privacy, detection of AI-generated content, and copyright and IP. We adopt a task-first framing of generation, focusing on what generation is as a machine learning problem, rather than only on how models implement it. + oai:arXiv.org:2509.06120v2 + cs.LG + stat.ML + Fri, 12 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - Daniele Bracale, Moulinath Banerjee, Cong Shi, Yuekai Sun + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ambuj Tewari - Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory - https://arxiv.org/abs/2511.23083 - arXiv:2511.23083v2 Announce Type: replace-cross -Abstract: High-capacity kernel Hopfield networks exhibit a \textit{Ridge of Optimization} characterized by extreme stability. While previously linked to \textit{Spectral Concentration}, its origin remains elusive. Here, we analyze the network dynamics on a statistical manifold, revealing that the Ridge corresponds to the Edge of Stability, a critical boundary where the Fisher Information Matrix becomes singular. 
We demonstrate that the apparent Euclidean force antagonism is a manifestation of \textit{Dual Equilibrium} in the Riemannian space. This unifies learning dynamics and capacity via the Minimum Description Length principle, offering a geometric theory of self-organized criticality. - oai:arXiv.org:2511.23083v2 + Understanding Outer Optimizers in Local SGD: Learning Rates, Momentum, and Acceleration + https://arxiv.org/abs/2509.10439 + arXiv:2509.10439v2 Announce Type: replace-cross +Abstract: Modern machine learning often requires training with large batch size, distributed data, and massively parallel compute hardware (like mobile and other edge devices or distributed data centers). Communication becomes a major bottleneck in such settings but methods like Local Stochastic Gradient Descent (Local SGD) show great promise in reducing this additional communication overhead. Local SGD consists of three parts: a local optimization process, an aggregation mechanism, and an outer optimizer that uses the aggregated updates from the nodes to produce a new model. While there exists an extensive literature on understanding the impact of hyperparameters in the local optimization process, the choice of outer optimizer and its hyperparameters is less clear. We study the role of the outer optimizer in Local SGD, and prove new convergence guarantees for the algorithm. In particular, we show that tuning the outer learning rate allows us to (a) trade off between optimization error and stochastic gradient noise variance, and (b) make up for ill-tuning of the inner learning rate. Our theory suggests that the outer learning rate should sometimes be set to values greater than $1$. We extend our results to settings where we use momentum in the outer optimizer, and we show a similar role for the momentum-adjusted outer learning rate. We also study acceleration in the outer optimizer and show that it improves the convergence rate as a function of the number of communication rounds, improving upon the convergence rate of prior algorithms that apply acceleration locally. Finally, we also introduce a novel data-dependent analysis of Local SGD that yields further insights on outer learning rate tuning. We conduct comprehensive experiments with standard language models and various outer optimizers to validate our theory. + oai:arXiv.org:2509.10439v2 cs.LG - cs.NE + math.OC stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Akira Tamamori + Ahmed Khaled, Satyen Kale, Arthur Douillard, Chi Jin, Rob Fergus, Manzil Zaheer - A Multivariate Bernoulli-Based Sampling Method for Multi-Label Data with Application to Meta-Research - https://arxiv.org/abs/2512.08371 - arXiv:2512.08371v2 Announce Type: replace-cross -Abstract: Datasets may contain observations with multiple labels. If the labels are not mutually exclusive, and if the labels vary greatly in frequency, obtaining a sample that includes sufficient observations with scarcer labels to make inferences about those labels, and which deviates from the population frequencies in a known manner, creates challenges. In this paper, we consider a multivariate Bernoulli distribution as our underlying distribution of a multi-label problem. We present a novel sampling algorithm that takes label dependencies into account. It uses observed label frequencies to estimate multivariate Bernoulli distribution parameters and calculate weights for each label combination. 
This approach ensures the weighted sampling acquires target distribution characteristics while accounting for label dependencies. We applied this approach to a sample of research articles from Web of Science labeled with 64 biomedical topic categories. We aimed to preserve category frequency order, reduce frequency differences between most and least common categories, and account for category dependencies. This approach produced a more balanced sub-sample, enhancing the representation of minority categories. - oai:arXiv.org:2512.08371v2 + An Eulerian Perspective on Straight-Line Sampling + https://arxiv.org/abs/2510.11657 + arXiv:2510.11657v2 Announce Type: replace-cross +Abstract: We study dynamic measure transport for generative modeling: specifically, flows induced by stochastic processes that bridge a specified source and target distribution. The conditional expectation of the process' velocity defines an ODE whose flow map achieves the desired transport. We ask \emph{which processes produce straight-line flows} -- i.e., flows whose pointwise acceleration vanishes and thus are exactly integrable with a first-order method? We provide a concise PDE characterization of straightness as a balance between conditional acceleration and the divergence of a weighted covariance (Reynolds) tensor. Using this lens, we fully characterize affine-in-time interpolants and show that straightness occurs exactly under deterministic endpoint couplings. We also derive necessary conditions that constrain flow geometry for general processes, offering broad guidance for designing transports that are easier to integrate. + oai:arXiv.org:2510.11657v2 cs.LG stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Simon Chung, Colby J. Vorland, Donna L. Maney, Andrew W. Brown + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Panos Tsimpos, Youssef Marzouk + + + Principal component analysis in econometrics: a selective inference perspective + https://arxiv.org/abs/2511.10419 + arXiv:2511.10419v2 Announce Type: replace-cross +Abstract: We study the long-standing problem of determining the number of principal components in econometric applications from a selective inference perspective. We consider i.i.d. observations from a $p$-dimensional random vector with $p<n$ and define the ``true'' dimensionality as the rank of the population covariance matrix. Building on the sequential testing viewpoint, we propose a data-driven procedure that estimates $\rank(\Sigma_X)$ using a statistic that depends on the eigenvalues of the sample covariance matrix. While the test statistic shares the functional form of its fixed design counterpart Choi et al. (2017), our analysis departs from the non-stochastic setting by treating the design as random and by avoiding parametric Gaussian assumptions. Under a locally defined null hypothesis, we establish asymptotically exact type~I error controls in the sequential testing procedure, with simulation results indicating empirical validity of the proposed method. 
+ oai:arXiv.org:2511.10419v2 + econ.EM + stat.AP + Fri, 12 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yasuyuki Matsumura, Chisato Tachibana DS FedProxGrad: Asymptotic Stationarity Without Noise Floor in Fair Federated Learning https://arxiv.org/abs/2512.08671 - arXiv:2512.08671v2 Announce Type: replace-cross + arXiv:2512.08671v3 Announce Type: replace-cross Abstract: Recent work \cite{arifgroup} introduced Federated Proximal Gradient \textbf{(\texttt{FedProxGrad})} for solving non-convex composite optimization problems in group fair federated learning. However, the original analysis established convergence only to a \textit{noise-dominated neighborhood of stationarity}, with explicit dependence on a variance-induced noise floor. In this work, we provide an improved asymptotic convergence analysis for a generalized \texttt{FedProxGrad}-type analytical framework with inexact local proximal solutions and explicit fairness regularization. We call this extended analytical framework \textbf{DS \texttt{FedProxGrad}} (Decay Step Size \texttt{FedProxGrad}). Under a Robbins-Monro step-size schedule \cite{robbins1951stochastic} and a mild decay condition on local inexactness, we prove that $\liminf_{r\to\infty} \mathbb{E}[\|\nabla F(\mathbf{x}^r)\|^2] = 0$, i.e., the algorithm is asymptotically stationary and the convergence rate does not depend on a variance-induced noise floor. - oai:arXiv.org:2512.08671v2 + oai:arXiv.org:2512.08671v3 cs.LG stat.ML - Thu, 11 Dec 2025 00:00:00 -0500 + Fri, 12 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Huzaifa Arif