|
|
<?xml version='1.0' encoding='UTF-8'?> |
|
|
<rss xmlns:arxiv="http://arxiv.org/schemas/atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0"> |
|
|
<channel> |
|
|
<title>stat updates on arXiv.org</title> |
|
|
<link>http://rss.arxiv.org/rss/stat</link> |
|
|
<description>stat updates on the arXiv.org e-print archive.</description> |
|
|
<atom:link href="http://rss.arxiv.org/rss/stat" rel="self" type="application/rss+xml"/> |
|
|
<docs>http://www.rssboard.org/rss-specification</docs> |
|
|
<language>en-us</language> |
|
|
<lastBuildDate>Fri, 19 Dec 2025 05:00:13 +0000</lastBuildDate> |
|
|
<managingEditor>[email protected]</managingEditor> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<skipDays> |
|
|
<day>Saturday</day> |
|
|
<day>Sunday</day> |
|
|
</skipDays> |
|
|
<item> |
|
|
<title>Consensus dimension reduction via multi-view learning</title> |
|
|
<link>https://arxiv.org/abs/2512.15802</link> |
|
|
<description>arXiv:2512.15802v1 Announce Type: new |
|
|
Abstract: A plethora of dimension reduction methods have been developed to visualize high-dimensional data in low dimensions. However, different dimension reduction methods often output different and possibly conflicting visualizations of the same data. This problem is further exacerbated by the choice of hyperparameters, which may substantially impact the resulting visualization. To obtain a more robust and trustworthy dimension reduction output, we advocate for a consensus approach, which summarizes multiple visualizations into a single consensus dimension reduction visualization. Here, we leverage ideas from multi-view learning in order to identify the patterns that are most stable or shared across the many different dimension reduction visualizations, or views, and subsequently visualize this shared structure in a single low-dimensional plot. We demonstrate that this consensus visualization effectively identifies and preserves the shared low-dimensional data structure through both simulated and real-world case studies. We further highlight our method's robustness to the choice of dimension reduction method and hyperparameters -- a highly desirable property when working towards trustworthy and reproducible data science.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.15802v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Bingxue An, Tiffany M. Tang</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Adaptive Kalman Filter for Systems with Unknown Initial Values</title> |
|
|
<link>https://arxiv.org/abs/2512.15810</link> |
|
|
<description>arXiv:2512.15810v1 Announce Type: new |
|
|
Abstract: Models of partially observed linear stochastic differential equations with unknown initial values of the non-observed component are considered in two situations. In the first problem, the initial value is deterministic; in the second, it is assumed to be a Gaussian random variable. The main problem is the computation of adaptive Kalman filters and the discussion of their asymptotic optimality. This program is carried out for both models in several steps. First, a preliminary estimator of the unknown parameter is constructed from observations on a learning interval. Then, this estimator is used to calculate recurrent one-step MLE estimators, which are subsequently substituted into the Kalman filtering equations.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.15810v1</guid> |
|
|
<category>math.ST</category> |
|
|
<category>math.PR</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Yury A Kutoyants (UM)</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Modeling Issues with Eye Tracking Data</title> |
|
|
<link>https://arxiv.org/abs/2512.15950</link> |
|
|
<description>arXiv:2512.15950v1 Announce Type: new |
|
|
Abstract: I describe and compare procedures for binary eye-tracking (ET) data. These procedures are applied to both raw and compressed data. The basic GLMM is a logistic mixed model with random effects for persons and items. Additional models address autocorrelation among serial eye-tracking observations. In particular, two novel approaches are illustrated that address serial dependence without the use of an observed lag-1 predictor: a first-order autoregressive model obtained with generalized estimating equations, and a recurrent two-state survival model. Altogether, the results of four different analyses point to unresolved issues in the analysis of eye-tracking data and new directions for analytic development.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.15950v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Gregory Camilli</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Reliability-Targeted Simulation of Item Response Data: Solving the Inverse Design Problem</title> |
|
|
<link>https://arxiv.org/abs/2512.16012</link> |
|
|
<description>arXiv:2512.16012v1 Announce Type: new |
|
|
Abstract: Monte Carlo simulations are the primary methodology for evaluating Item Response Theory (IRT) methods, yet marginal reliability - the fundamental metric of data informativeness - is rarely treated as an explicit design factor. Unlike in multilevel modeling where the intraclass correlation (ICC) is routinely manipulated, IRT studies typically treat reliability as an incidental outcome, creating a "reliability omission" that obscures the signal-to-noise ratio of generated data. To address this gap, we introduce a principled framework for reliability-targeted simulation, transforming reliability from an implicit by-product into a precise input parameter. We formalize the inverse design problem, solving for a global discrimination scaling factor that uniquely achieves a pre-specified target reliability. Two complementary algorithms are proposed: Empirical Quadrature Calibration (EQC) for rapid, deterministic precision, and Stochastic Approximation Calibration (SAC) for rigorous stochastic estimation. A comprehensive validation study across 960 conditions demonstrates that EQC achieves essentially exact calibration, while SAC remains unbiased across non-normal latent distributions and empirical item pools. Furthermore, we clarify the theoretical distinction between average-information and error-variance-based reliability metrics, showing they require different calibration scales due to Jensen's inequality. An accompanying open-source R package, IRTsimrel, enables researchers to standardize reliability as a controlled experimental input.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16012v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.CO</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>JoonHo Lee</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Maximum Likelihood Estimation for Scaled Inhomogeneous Phase-Type Distributions from Discrete Observations</title> |
|
|
<link>https://arxiv.org/abs/2512.16061</link> |
|
|
<description>arXiv:2512.16061v1 Announce Type: new |
|
|
Abstract: Inhomogeneous phase-type (IPH) distributions extend classical phase-type models by allowing transition intensities to vary over time, offering greater flexibility for modeling heavy-tailed or time-dependent absorption phenomena. We focus on the subclass of IPH distributions with time-scaled sub-intensity matrices of the form ${\Lambda}(t) = h_{\beta}(t){\Lambda}$, which admits a time transformation to a homogeneous Markov jump process. For this class, we develop a statistical inference framework for discretely observed trajectories that combines Markov-bridge reconstruction with a stochastic EM algorithm and a gradient-based update. The resulting method yields joint maximum-likelihood estimates of both the baseline sub-intensity matrix ${\Lambda}$ and the time-scaling parameter $\beta$. Through simulation studies for the matrix-Gompertz and matrix-Weibull families, and a real-data application to coronary allograft vasculopathy progression, we demonstrate that the proposed approach provides an accurate and computationally tractable tool for fitting time-scaled IPH models to irregular multi-state data.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16061v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Fernando Baltazar-Larios, Alejandra Quintos</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>BayesSum: Bayesian Quadrature in Discrete Spaces</title> |
|
|
<link>https://arxiv.org/abs/2512.16105</link> |
|
|
<description>arXiv:2512.16105v1 Announce Type: new |
|
|
Abstract: This paper addresses the challenging computational problem of estimating intractable expectations over discrete domains. Existing approaches, including Monte Carlo and Russian Roulette estimators, are consistent but often require a large number of samples to achieve accurate results. We propose a novel estimator, \emph{BayesSum}, which is an extension of Bayesian quadrature to discrete domains. It is more sample efficient than alternatives due to its ability to make use of prior information about the integrand through a Gaussian process. We show this through theory, deriving a convergence rate significantly faster than Monte Carlo in a broad range of settings. We also demonstrate empirically that our proposed method does indeed require fewer samples on several synthetic settings as well as for parameter estimation for Conway-Maxwell-Poisson and Potts models.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16105v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Sophia Seulkee Kang, François-Xavier Briol, Toni Karvonen, Zonghao Chen</dc:creator>
|
|
</item> |
|
|
<item> |
|
|
<title>An Efficient Framework for Robust Sample Size Determination</title> |
|
|
<link>https://arxiv.org/abs/2512.16231</link> |
|
|
<description>arXiv:2512.16231v1 Announce Type: new |
|
|
Abstract: In many settings, robust data analysis involves computational methods for uncertainty quantification and statistical inference. To design frequentist studies that leverage robust analysis methods, suitable sample sizes to achieve desired power are often found by estimating sampling distributions of p-values via intensive simulation. Moreover, most sample size recommendations rely heavily on assumptions about a single data-generating process. Consequently, robustness in data analysis does not by itself imply robustness in study design, as examining sample size sensitivity to data-generating assumptions typically requires further simulations. We propose an economical alternative for determining sample sizes that are robust to multiple data-generating mechanisms. Applying our theoretical results that model p-values as a function of the sample size, we assess power across the sample size space using simulations conducted at only two sample sizes for each data-generating mechanism. We demonstrate the broad applicability of our methodology to study design based on M-estimators in both experimental and observational settings through a varied set of clinical examples.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16231v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Luke Hagar, Andrew J. Martin</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>DAG Learning from Zero-Inflated Count Data Using Continuous Optimization</title> |
|
|
<link>https://arxiv.org/abs/2512.16233</link> |
|
|
<description>arXiv:2512.16233v1 Announce Type: new |
|
|
Abstract: We address network structure learning from zero-inflated count data by casting each node as a zero-inflated generalized linear model and optimizing a smooth, score-based objective under a directed acyclic graph constraint. Our Zero-Inflated Continuous Optimization (ZICO) approach uses node-wise likelihoods with canonical links and enforces acyclicity through a differentiable surrogate constraint combined with sparsity regularization. ZICO achieves superior performance with faster runtimes on simulated data. It also performs comparably to or better than common algorithms for reverse engineering gene regulatory networks. ZICO is fully vectorized and mini-batched, enabling learning on larger variable sets with practical runtimes in a wide range of domains.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16233v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Noriaki Sato, Marco Scutari, Shuichi Kawano, Rui Yamaguchi, Seiya Imoto</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Bayesian Empirical Bayes: Simultaneous Inference from Probabilistic Symmetries</title> |
|
|
<link>https://arxiv.org/abs/2512.16239</link> |
|
|
<description>arXiv:2512.16239v1 Announce Type: new |
|
|
Abstract: Empirical Bayes (EB) improves the accuracy of simultaneous inference "by learning from the experience of others" (Efron, 2012). Classical EB theory focuses on latent variables that are iid draws from a fitted prior (Efron, 2019). Modern applications, however, feature complex structure, like arrays, spatial processes, or covariates. How can we apply EB ideas to these settings? We propose a generalized approach to empirical Bayes based on the notion of probabilistic symmetry. Our method pairs a simultaneous inference problem, with an unknown prior, to a symmetry assumption on the joint distribution of the latent variables. Each symmetry implies an ergodic decomposition, which we use to derive a corresponding empirical Bayes method. We call this method Bayesian empirical Bayes (BEB). We show how BEB recovers the classical methods of empirical Bayes, which implicitly assume exchangeability. We then use it to extend EB to other probabilistic symmetries: (i) EB matrix recovery for arrays and graphs; (ii) covariate-assisted EB for conditional data; (iii) EB spatial regression under shift invariance. We develop scalable algorithms based on variational inference and neural networks. In simulations, BEB outperforms existing approaches to denoising arrays and spatial data. On real data, we demonstrate BEB by denoising a cancer gene-expression matrix and analyzing spatial air-quality data from New York City.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16239v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Bohan Wu, Eli N. Weinstein, David M. Blei</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>An Open Workflow Model for Improving Educational Video Design: Tools, Data, and Insights</title> |
|
|
<link>https://arxiv.org/abs/2512.16254</link> |
|
|
<description>arXiv:2512.16254v1 Announce Type: new |
|
|
Abstract: Educational videos are widely used across various instructional models in higher education to support flexible and self-paced learning. However, student engagement with these videos varies significantly depending on how they are designed. While several studies have identified potential influencing factors, there remains a lack of scalable tools and open datasets to support large-scale, data-driven improvements in video design. This study aims to advance data-driven approaches to educational video design. Its core contributions include: (1) a workflow model for analysing educational videos; (2) an open-source implementation for extracting video metadata and features; (3) an accessible, community-driven database of video attributes; (4) a case study applying the approach to two engineering courses; and (5) an initial machine learning-based analysis to explore the relative influence of various video characteristics on student engagement. This work lays the groundwork for a shared, evidence-based approach to educational video design.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16254v1</guid> |
|
|
<category>stat.AP</category> |
|
|
<category>physics.ed-ph</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Mohamed Tolba, Olivia Kendall, Daniel Tudball Smith, Alexander Gregg, Tony Vo, Scott Wordley</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Repulsive g-Priors for Regression Mixtures</title> |
|
|
<link>https://arxiv.org/abs/2512.16276</link> |
|
|
<description>arXiv:2512.16276v1 Announce Type: new |
|
|
Abstract: Mixture regression models are powerful tools for capturing heterogeneous covariate-response relationships, yet classical finite mixtures and Bayesian nonparametric alternatives often suffer from instability or overestimation of clusters when component separability is weak. Recent repulsive priors improve parsimony in density mixtures by discouraging nearby components, but their direct extension to regression is nontrivial since separation must respect the predictive geometry induced by covariates. We propose a repulsive g-prior for regression mixtures that enforces separation in the Mahalanobis metric, penalizing components indistinguishable in the predictive mean space. This construction preserves conjugacy-like updates while introducing geometry-aware interactions, enabling efficient blocked-collapsed Gibbs sampling. Theoretically, we establish tractable normalizing bounds, posterior contraction rates, and shrinkage of tail mass on the number of components. Simulations under correlated and overlapping designs demonstrate improved clustering and prediction relative to independent, Euclidean-repulsive, and sparsity-inducing baselines.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16276v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Yuta Hayashida, Shonosuke Sugasawa</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Hazard-based distributional regression via ordinary differential equations</title> |
|
|
<link>https://arxiv.org/abs/2512.16336</link> |
|
|
<description>arXiv:2512.16336v1 Announce Type: new |
|
|
Abstract: The hazard function is central to the formulation of commonly used survival regression models such as the proportional hazards and accelerated failure time models. However, these models rely on a shared baseline hazard, which, when specified parametrically, can only capture limited shapes. To overcome this limitation, we propose a general class of parametric survival regression models obtained by modelling the hazard function using autonomous systems of ordinary differential equations (ODEs). Covariate information is incorporated via transformed linear predictors on the parameters of the ODE system. Our framework capitalises on the interpretability of parameters in common ODE systems, enabling the identification of covariate values that produce qualitatively distinct hazard shapes associated with different attractors of the system of ODEs. This provides deeper insights into how covariates influence survival dynamics. We develop efficient Bayesian computational tools, including parallelised evaluation of the log-posterior, which facilitates integration with general-purpose Markov Chain Monte Carlo samplers. We also derive conditions for posterior asymptotic normality, enabling fast approximations of the posterior. A central contribution of our work lies in the case studies. We demonstrate the methodology using clinical trial data with crossing survival curves, and a study of cancer recurrence times where our approach reveals how the efficacy of interventions (treatments) on hazard and survival are influenced by patient characteristics.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16336v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>J. A. Christen, F. J. Rubio</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Bayesian joint modelling of longitudinal biomarkers to enable extrapolation of overall survival: an application using larotrectinib trial clinical data</title> |
|
|
<link>https://arxiv.org/abs/2512.16340</link> |
|
|
<description>arXiv:2512.16340v1 Announce Type: new |
|
|
Abstract: Objectives To investigate the use of a Bayesian joint modelling approach to predict overall survival (OS) from immature clinical trial data using an intermediate biomarker. To compare the results with a typical parametric approach of extrapolation and observed survival from a later datacut. |
|
|
Methods Data were pooled from three phase I/II open-label trials evaluating larotrectinib in 196 patients with neurotrophic tyrosine receptor kinase fusion-positive (NTRK+) solid tumours followed up until July 2021. Bayesian joint modelling was used to obtain patient-specific predictions of OS using individual-level sum of diameter of target lesions (SLD) profiles up to the time at which the patient died or was censored. Overall and tumour site-specific estimates were produced, assuming a common, exchangeable, or independent association structure across tumour sites. |
|
|
Results The overall risk of mortality was 9% higher per 10mm increase in SLD (HR 1.09, 95% CrI 1.05 to 1.14) for all tumour sites combined. Tumour-specific point estimates of restricted mean, median, and landmark survival were more similar across models for larger tumour groups, compared to smaller tumour groups. In general, parameters were estimated with more certainty compared to a standard Weibull model and were aligned with the more recent datacut.
|
|
Conclusions Joint modelling using intermediate outcomes such as tumour burden can offer an alternative approach to traditional survival modelling and may improve survival predictions from limited follow-up data. This approach allows complex hierarchical data structures, such as patients nested within tumour types, and can also incorporate multiple longitudinal biomarkers in a multivariate modelling framework.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16340v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Louise Linsell, Noman Paracha, Jamie Grossman, Carsten Bokemeyer, Jesus Garcia-Foncillas, Antoine Italiano, Gilles Vassal, Yuxian Chen, Barbara Torlinska, Keith R Abrams</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Empirical Likelihood Meets Prediction-Powered Inference</title> |
|
|
<link>https://arxiv.org/abs/2512.16363</link> |
|
|
<description>arXiv:2512.16363v1 Announce Type: new |
|
|
Abstract: We study inference with a small labeled sample, a large unlabeled sample, and high-quality predictions from an external model. We link prediction-powered inference with empirical likelihood by stacking supervised estimating equations based on labeled outcomes with auxiliary moment conditions built from predictions, and then optimizing empirical likelihood under these joint constraints. The resulting empirical likelihood-based prediction-powered inference (EPI) estimator is asymptotically normal, has asymptotic variance no larger than the fully supervised estimator, and attains the semiparametric efficiency bound when the auxiliary functions span the predictable component of the supervised score. For hypothesis testing and confidence sets, empirical likelihood ratio statistics admit chi-squared-type limiting distributions. As a by-product, the empirical likelihood weights induce a calibrated empirical distribution that integrates supervised and prediction-based information, enabling estimation and uncertainty quantification for general functionals beyond parameters defined by estimating equations. We present two practical implementations: one based on basis expansions in the predictions and covariates, and one that learns an approximately optimal auxiliary function by cross-fitting. In simulations and applications, EPI reduces mean squared error and shortens confidence intervals while maintaining nominal coverage.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16363v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Guanghui Wang, Mengtao Wen, Changliang Zou</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Asymptotic and finite-sample distributions of one- and two-sample empirical relative entropy, with application to change-point detection</title> |
|
|
<link>https://arxiv.org/abs/2512.16411</link> |
|
|
<description>arXiv:2512.16411v1 Announce Type: new |
|
|
Abstract: Relative entropy, as a divergence metric between two distributions, can be used for offline change-point detection and extends classical methods that mainly rely on moment-based discrepancies. To build a statistical test suitable for this context, we study the distribution of empirical relative entropy and derive several types of approximations: concentration inequalities for finite samples, asymptotic distributions, and Berry-Esseen bounds in a pre-asymptotic regime. For the latter, we introduce a new approach to obtain Berry-Esseen inequalities for nonlinear functions of sum statistics under some convexity assumptions. Our theoretical contributions cover both one- and two-sample empirical relative entropies. We then detail a change-point detection procedure built on relative entropy and compare it, through extensive simulations, with classical methods based on moments or on information criteria. Finally, we illustrate its practical relevance on two real datasets involving temperature series and volatility of stock indices.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16411v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>math.ST</category> |
|
|
<category>q-fin.ST</category> |
|
|
<category>q-fin.TR</category> |
|
|
<category>stat.AP</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-sa/4.0/</dc:rights> |
|
|
<dc:creator>Matthieu Garcin, Louis Perot</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Dynamic Prediction for Hospital Readmission in Patients with Chronic Heart Failure</title> |
|
|
<link>https://arxiv.org/abs/2512.16463</link> |
|
|
<description>arXiv:2512.16463v1 Announce Type: new |
|
|
Abstract: Hospital readmission among patients with chronic heart failure (HF) is a major clinical and economic burden. Dynamic prediction models that leverage longitudinal biomarkers may improve risk stratification over traditional static models. This study aims to develop and validate a joint model (JM) using longitudinal N-terminal pro-B-type natriuretic peptide (NT-proBNP) measurements to predict the risk of rehospitalization or death in HF patients. We analyzed real-world data from the TriNetX database, including patients with an incident HF diagnosis between 2016 and 2022. The final selected cohort included 1,804 patients. A Bayesian joint modeling framework was developed to link patient-specific NT-proBNP trajectories to the risk of a composite endpoint (HF rehospitalization or all-cause mortality) within a 180-day window following hospital discharge. The model's performance was evaluated using 5-fold cross-validation and assessed with the Integrated Brier Score (IBS) and Integrated Calibration Index (ICI). The joint model demonstrated a strong predictive advantage over a benchmark static model, particularly when making updated predictions at later time points (180-360 days). A joint model trained on patients with more frequent NT-proBNP measurements achieved the highest accuracy. The main joint model showed excellent calibration, suggesting its risk estimates are reliable. These findings suggest that modeling the full trajectory of NT-proBNP with a joint modeling framework enables more accurate and dynamic risk assessment compared to static, single-timepoint methods. This approach supports the development of adaptive clinical decision-support tools for personalized HF management.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16463v1</guid> |
|
|
<category>stat.AP</category> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Rebecca Farina, Francois Mercier, Christian Wohlfart, Serge Masson, Silvia Metelli</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Efficient and scalable clustering of survival curves</title> |
|
|
<link>https://arxiv.org/abs/2512.16481</link> |
|
|
<description>arXiv:2512.16481v1 Announce Type: new |
|
|
Abstract: Survival analysis encompasses a broad range of methods for analyzing time-to-event data, with one key objective being the comparison of survival curves across groups. Traditional approaches for identifying clusters of survival curves often rely on computationally intensive bootstrap techniques to approximate the null hypothesis distribution. While effective, these methods impose significant computational burdens. In this work, we propose a novel approach that leverages the k-means and log-rank test to efficiently identify and cluster survival curves. Our method eliminates the need for computationally expensive resampling, significantly reducing processing time while maintaining statistical reliability. By systematically evaluating survival curves and determining optimal clusters, the proposed method ensures a practical and scalable alternative for large-scale survival data analysis. Through simulation studies, we demonstrate that our approach achieves results comparable to existing bootstrap-based clustering methods while dramatically improving computational efficiency. These findings suggest that the log-rank-based clustering procedure offers a viable and time-efficient solution for researchers working with multiple survival curves in medical and epidemiological studies.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16481v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.CO</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-sa/4.0/</dc:rights> |
|
|
<dc:creator>Nora M. Villanueva, Marta Sestelo, Luis Meira-Machado</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Advantages and limitations in the use of transfer learning for individual treatment effects in causal machine learning</title> |
|
|
<link>https://arxiv.org/abs/2512.16489</link> |
|
|
<description>arXiv:2512.16489v1 Announce Type: new |
|
|
Abstract: Generalizing causal knowledge across diverse environments is challenging, especially when estimates from large-scale datasets must be applied to smaller or systematically different contexts, where external validity is critical. Model-based estimators of individual treatment effects (ITE) from machine learning require large sample sizes, limiting their applicability in domains such as behavioral sciences with smaller datasets. We demonstrate how estimation of ITEs with Treatment Agnostic Representation Networks (TARNet; Shalit et al., 2017) can be improved by leveraging knowledge from source datasets and adapting it to new settings via transfer learning (TL-TARNet; Aloui et al., 2023). In simulations that vary source and sample sizes and consider both randomized and non-randomized intervention target settings, the transfer-learning extension TL-TARNet improves upon standard TARNet, reducing ITE error and attenuating bias when a large unbiased source is available and target samples are small. In an empirical application using the India Human Development Survey (IHDS-II), we estimate the effect of mothers' firewood collection time on children's weekly study time; transfer learning pulls the target mean ITEs toward the source ITE estimate, reducing bias in the estimates obtained without transfer. These results suggest that transfer learning for causal models can improve the estimation of ITE in small samples.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16489v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-sa/4.0/</dc:rights> |
|
|
<dc:creator>Seyda Betul Aydin, Holger Brandt</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Extending a Matrix Lie Group Model of Measurement Symmetries</title> |
|
|
<link>https://arxiv.org/abs/2512.16547</link> |
|
|
<description>arXiv:2512.16547v1 Announce Type: new |
|
|
Abstract: Symmetry principles underlie and guide scientific theory and research, from Curie's invariance formulation to modern applications across physics, chemistry, and mathematics. Building on a recent matrix Lie group measurement model, this paper extends the framework to identify additional measurement symmetries implied by Lie group theory. Lie groups provide the mathematics of continuous symmetries, while Lie algebras serve as their infinitesimal generators. Within applied measurement theory, the preservation of symmetries in transformation groups acting on score frequency distributions ensures invariance in transformed distributions, with implications for validity, comparability, and conservation of information. A simulation study demonstrates how breaks in measurement symmetry affect score distribution symmetry and break effect-size comparability. Practical applications are considered, particularly in meta-analysis, where the standardized mean difference (SMD) is shown to remain invariant across measures only under specific symmetry conditions derived from the Lie group model. These results underscore symmetry as a unifying principle in measurement theory and its role in evidence-based research.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16547v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>William R. Nugent</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Opening the House: Datasets for Mixed Doubles Curling</title> |
|
|
<link>https://arxiv.org/abs/2512.16574</link> |
|
|
<description>arXiv:2512.16574v1 Announce Type: new |
|
|
Abstract: We introduce the most comprehensive publicly available datasets for mixed doubles curling, constructed from eleven top-level tournaments from the CurlIT (https://curlit.com/results) Results Booklets spanning 53 countries, 1,112 games, and nearly 70,000 recorded shots. While curling analytics has grown in recent years, mixed doubles remains under-served due to limited access to data. Using a combined text-scraping and image-processing pipeline, we extract and standardize detailed game- and shot-level information, including player statistics, hammer possession, Power Play usage, stone coordinates, and post-shot scoring states. We describe the data engineering workflow, highlight challenges in parsing historical records, and derive additional contextual features that enable rigorous strategic analysis. Using these datasets, we present initial insights into shot selection and success rates, scoring distributions, and team efficiencies, illustrating key differences between mixed doubles and traditional 4-player curling. We highlight various ways to analyze this type of data, including at the shot, end, game, or team level, to display its versatility. The resulting resources provide a foundation for advanced performance modeling, strategic evaluation, and future research in mixed doubles curling analytics, supporting broader analytical engagement with this rapidly growing discipline.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16574v1</guid> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Robyn Ritchie, Alexandre Leblanc, Thomas Loughin</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Quantile-based causal inference for spatio-temporal processes: Assessing the impacts of wildfires on US air quality</title> |
|
|
<link>https://arxiv.org/abs/2512.16603</link> |
|
|
<description>arXiv:2512.16603v1 Announce Type: new |
|
|
Abstract: Wildfires pose an increasingly severe threat to air quality, yet quantifying their causal impact remains challenging due to unmeasured meteorological and geographic confounders. Moreover, wildfire impacts on air quality may exhibit heterogeneous effects across pollution levels, which conventional mean-based causal methods fail to capture. To address these challenges, we develop a Quantile-based Latent Spatial Confounder Model (QLSCM) that substitutes conditional expectations with conditional quantiles, enabling causal analysis across the entire outcome distribution. We establish the causal interpretation of QLSCM theoretically, prove the identifiability of causal effects, and demonstrate estimator consistency under mild conditions. Simulations confirm the bias correction capability and the advantage of quantile-based inference over mean-based approaches. Applying our method to contiguous US wildfire and air quality data, we uncover important heterogeneous effects: fire radiative power exerts significant positive causal effects on aerosol optical depth at high quantiles in Western states like California and Oregon, while insignificant at lower quantiles. This indicates that wildfire impacts on air quality primarily manifest during extreme pollution events. Regional analyses reveal that Western and Northwestern regions experience the strongest causal effects during such extremes. These findings provide critical insights for environmental policy by identifying where and when mitigation efforts would be most effective.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16603v1</guid> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Zipei Geng, Jordan Richards, Raphael Huser, Marc G. Genton</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Riemannian Stochastic Interpolants for Amorphous Particle Systems</title> |
|
|
<link>https://arxiv.org/abs/2512.16607</link> |
|
|
<description>arXiv:2512.16607v1 Announce Type: new |
|
|
Abstract: Modern generative models hold great promise for accelerating diverse tasks involving the simulation of physical systems, but they must be adapted to the specific constraints of each domain. Significant progress has been made for biomolecules and crystalline materials. Here, we address amorphous materials (glasses), which are disordered particle systems lacking atomic periodicity. Sampling equilibrium configurations of glass-forming materials is a notoriously slow and difficult task. This obstacle could be overcome by developing a generative framework capable of producing equilibrium configurations with well-defined likelihoods. In this work, we address this challenge by leveraging an equivariant Riemannian stochastic interpolation framework which combines Riemannian stochastic interpolants and equivariant flow matching. Our method rigorously incorporates periodic boundary conditions and the symmetries of multi-component particle systems, adapting an equivariant graph neural network to operate directly on the torus. Our numerical experiments on model amorphous systems demonstrate that enforcing geometric and symmetry constraints significantly improves generative performance.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16607v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cond-mat.stat-mech</category> |
|
|
<category>cs.LG</category> |
|
|
<category>physics.comp-ph</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Louis Grenioux, Leonardo Galliano, Ludovic Berthier, Giulio Biroli, Marylou Gabrié</dc:creator>
|
|
</item> |
|
|
<item> |
|
|
<title>Exponentially weighted estimands and the exponential family: filtering, prediction and smoothing</title> |
|
|
<link>https://arxiv.org/abs/2512.16745</link> |
|
|
<description>arXiv:2512.16745v1 Announce Type: new |
|
|
Abstract: We propose using a discounted version of a convex combination of the log-likelihood with the corresponding expected log-likelihood such that when they are maximized they yield a filter, predictor and smoother for time series. This paper then focuses on working out the implications of this in the case of the canonical exponential family. The results are simple exact filters, predictors and smoothers with linear recursions. A theory for these models is developed and the models are illustrated on simulated and real data.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16745v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>econ.EM</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Simon Donker van Heel, Neil Shephard</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Shift-Aware Gaussian-Supremum Validation for Wasserstein-DRO CVaR Portfolios</title> |
|
|
<link>https://arxiv.org/abs/2512.16748</link> |
|
|
<description>arXiv:2512.16748v1 Announce Type: new |
|
|
Abstract: We study portfolio selection with a Conditional Value-at-Risk (CVaR) constraint under distribution shift and serial dependence. While Wasserstein distributionally robust optimization (DRO) offers tractable protection via an ambiguity ball around empirical data, choosing the ball radius is delicate: large radii are conservative, small radii risk violation under regime change. We propose a shift-aware Gaussian-supremum (GS) validation framework for Wasserstein-DRO CVaR portfolios, building on the work by Lam and Qian (2019). Phase I of the framework generates a candidate path by solving the exact reformulation of the robust CVaR constraint over a grid of Wasserstein radii. Phase II of the framework learns a target deployment law $Q$ by density-ratio reweighting of a time-ordered validation fold, computes weighted CVaR estimates, and calibrates a simultaneous upper confidence band via a block multiplier bootstrap to account for dependence. We select the least conservative feasible portfolio (or abstain if the effective sample size collapses). Theoretically, we extend the normalized GS validator to non-i.i.d. financial data: under weak dependence and regularity of the weighted scores, any portfolio passing our validator satisfies the CVaR limit under $Q$ with probability at least $1-\beta$; the Wasserstein term contributes a deterministic margin $(\delta/\alpha)\|x\|_*$. Empirical results indicate improved return-risk trade-offs versus the naive baseline.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16748v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Derek Long</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Rao-Blackwellized e-variables</title> |
|
|
<link>https://arxiv.org/abs/2512.16759</link> |
|
|
<description>arXiv:2512.16759v1 Announce Type: new |
|
|
Abstract: We show that for any concave utility, the expected utility of an e-variable can only increase after conditioning on a sufficient statistic. The simplest form of the result has an extremely straightforward proof, which follows from a single application of Jensen's inequality. Similar statements hold for compound e-variables, asymptotic e-variables, and e-processes. These results echo the Rao-Blackwell theorem, which states that the expected squared error of an estimator can only decrease after conditioning on a sufficient statistic. We provide several applications of this insight, including a simplified derivation of the log-optimal e-variable for linear regression with known variance.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16759v1</guid> |
|
|
<category>math.ST</category> |
|
|
<category>math.PR</category> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Dante de Roos, Ben Chugg, Peter Grünwald, Aaditya Ramdas</dc:creator>
|
|
</item> |
|
|
<item> |
|
|
<title>On The Hidden Biases of Flow Matching Samplers</title> |
|
|
<link>https://arxiv.org/abs/2512.16768</link> |
|
|
<description>arXiv:2512.16768v1 Announce Type: new |
|
|
Abstract: We study the implicit bias of flow matching (FM) samplers via the lens of empirical flow matching. Although population FM may produce gradient-field velocities resembling optimal transport (OT), we show that the empirical FM minimizer is almost never a gradient field, even when each conditional flow is. Consequently, empirical FM is intrinsically energetically suboptimal. In view of this, we analyze the kinetic energy of generated samples. With Gaussian sources, both instantaneous and integrated kinetic energies exhibit exponential concentration, while heavy-tailed sources lead to polynomial tails. These behaviors are governed primarily by the choice of source distribution rather than the data. Overall, these notes provide a concise mathematical account of the structural and energetic biases arising in empirical FM.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16768v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<category>math.PR</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Soon Hoe Lim</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Distributed inference for heterogeneous mixture models using multi-site data</title> |
|
|
<link>https://arxiv.org/abs/2512.16833</link> |
|
|
<description>arXiv:2512.16833v1 Announce Type: new |
|
|
Abstract: Mixture models postulate the overall population as a mixture of finite subpopulations with unobserved membership. Fitting mixture models usually requires large sample sizes and combining data from multiple sites can be beneficial. However, sharing individual participant data across sites is often less feasible due to various types of practical constraints, such as data privacy concerns. Moreover, substantial heterogeneity may exist across sites, and locally identified latent classes may not be comparable across sites. We propose a unified modeling framework where a common definition of the latent classes is shared across sites and heterogeneous mixing proportions of latent classes are allowed to account for between-site heterogeneity. To fit the heterogeneous mixture model on multi-site data, we propose a novel distributed Expectation-Maximization (EM) algorithm where at each iteration a density ratio tilted surrogate Q function is constructed to approximate the standard Q function of the EM algorithm as if the data from multiple sites could be pooled together. Theoretical analysis shows that our estimator achieves the same contraction property as the estimators derived from the EM algorithm based on the pooled data.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16833v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Xiaokang Liu, Rui Duan, Raymond J. Carroll, Yang Ning, Yong Chen</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Identification and efficient estimation of compliance and network causal effects in cluster-randomized trials</title> |
|
|
<link>https://arxiv.org/abs/2512.16857</link> |
|
|
<description>arXiv:2512.16857v1 Announce Type: new |
|
|
Abstract: Treatment noncompliance is pervasive in infectious disease cluster-randomized trials. Although all individuals within a cluster are assigned the same treatment condition, the treatment uptake status may vary across individuals due to noncompliance. We propose a semiparametric framework to evaluate the individual compliance effect and network assignment effect within principal stratum exhibiting different patterns of noncompliance. The individual compliance effect captures the portion of the treatment effect attributable to changes in treatment receipt, while the network assignment effect reflects the pure impact of treatment assignment and spillover among individuals within the same cluster. Unlike prior efforts which either empirically identify or interval identify these estimands, we characterize new structural assumptions for nonparametric point identification. We then develop semiparametrically efficient estimators that combine data-adaptive machine learning methods with efficient influence functions to enable more robust inference. Additionally, we introduce sensitivity analysis methods to study the impact under assumption violations, and apply the proposed methods to reanalyze a cluster-randomized trial in Kenya that evaluated the impact of school-based mass deworming on disease transmission.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16857v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Chao Cheng, Georgia Papadogeorgou, Fan Li</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Hidden Order in Trades Predicts the Size of Price Moves</title> |
|
|
<link>https://arxiv.org/abs/2512.15720</link> |
|
|
<description>arXiv:2512.15720v1 Announce Type: cross |
|
|
Abstract: Financial markets exhibit an apparent paradox: while directional price movements remain largely unpredictable--consistent with weak-form efficiency--the magnitude of price changes displays systematic structure. Here we demonstrate that real-time order-flow entropy, computed from a 15-state Markov transition matrix at second resolution, predicts the magnitude of intraday returns without providing directional information. Analysis of 38.5 million SPY trades over 36 trading days reveals that conditioning on entropy below the 5th percentile increases subsequent 5-minute absolute returns by a factor of 2.89 (t = 12.41, p < 0.0001), while directional accuracy remains at 45.0%--statistically indistinguishable from chance (p = 0.12). This decoupling arises from a fundamental symmetry: entropy is invariant under sign permutation, detecting the presence of informed trading without revealing its direction. Walk-forward validation across five non-overlapping test periods confirms out-of-sample predictability, and label-permutation placebo tests yield z = 14.4 against the null. These findings suggest that information-theoretic measures may serve as volatility state variables in market microstructure, though the limited sample (36 days, single instrument) requires extended validation.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.15720v1</guid> |
|
|
<category>q-fin.TR</category> |
|
|
<category>q-fin.ST</category> |
|
|
<category>stat.AP</category> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Mainak Singha</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Decision-Focused Bias Correction for Fluid Approximation</title> |
|
|
<link>https://arxiv.org/abs/2512.15726</link> |
|
|
<description>arXiv:2512.15726v1 Announce Type: cross |
|
|
Abstract: Fluid approximation is a widely used approach for solving two-stage stochastic optimization problems, with broad applications in service system design such as call centers and healthcare operations. However, replacing the underlying random distribution (e.g., demand distribution) with its mean (e.g., the time-varying average arrival rate) introduces bias in performance estimation and can lead to suboptimal decisions. In this paper, we investigate how to identify an alternative point statistic, which is not necessarily the mean, such that substituting this statistic into the two-stage optimization problem yields the optimal decision. We refer to this statistic as the decision-corrected point estimate (time-varying arrival rate). For a general service network with customer abandonment costs, we establish necessary and sufficient conditions for the existence of such a corrected point estimate and propose an algorithm for its computation. Under a decomposable network structure, we further show that the resulting decision-corrected point estimate is closely related to the classical newsvendor solution. Numerical experiments demonstrate the superiority of our decision-focused correction method compared to the traditional fluid approximation.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.15726v1</guid> |
|
|
<category>math.OC</category> |
|
|
<category>math.PR</category> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Can Er, Mo Liu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Data Valuation for LLM Fine-Tuning: Efficient Shapley Value Approximation via Language Model Arithmetic</title> |
|
|
<link>https://arxiv.org/abs/2512.15765</link> |
|
|
<description>arXiv:2512.15765v1 Announce Type: cross |
|
|
Abstract: Data is a critical asset for training large language models (LLMs), alongside compute resources and skilled workers. While some training data is publicly available, substantial investment is required to generate proprietary datasets, such as human preference annotations, or to curate new ones from existing sources. As larger datasets generally yield better model performance, two natural questions arise. First, how can data owners make informed decisions about curation strategies and investment in data sources? Second, how can multiple data owners collaboratively pool their resources to train superior models while fairly distributing the benefits? This problem of data valuation, which is not specific to large language models, has been addressed by the machine learning community through the lens of cooperative game theory, with the Shapley value being the prevalent solution concept. However, computing Shapley values is notoriously expensive for data valuation, typically requiring numerous model retrainings, which can become prohibitive for large machine learning models. In this work, we demonstrate that this computational challenge is dramatically simplified for LLMs trained with Direct Preference Optimization (DPO). We show how the specific mathematical structure of DPO enables scalable Shapley value computation. We believe this observation unlocks many applications at the intersection of data valuation and large language models.</description>
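To make the computational cost concrete, here is the generic permutation-sampling estimator of data Shapley values that the abstract contrasts with: each sampled ordering requires one utility evaluation per data point, and in the standard setup each utility evaluation means retraining a model. The DPO-specific shortcut the paper develops is not shown; the toy utility below is a placeholder assumption.

# Generic Monte Carlo (permutation-sampling) data Shapley estimator.
# Illustrative only; the paper's DPO-based shortcut that avoids retraining
# is not reproduced here.
import numpy as np

def shapley_values(n_points, utility, n_permutations=200, rng=None):
    """Average marginal contribution of each point over random orderings.
    `utility(indices)` scores the coalition given by a list of point indices."""
    if rng is None:
        rng = np.random.default_rng(0)
    values = np.zeros(n_points)
    for _ in range(n_permutations):
        perm = rng.permutation(n_points)
        coalition = []
        prev = utility(coalition)
        for i in perm:
            coalition.append(int(i))
            cur = utility(coalition)       # in practice: train on coalition, then evaluate
            values[i] += cur - prev
            prev = cur
    return values / n_permutations

# toy utility: coalition value is the total "quality" of its points (hypothetical)
rng = np.random.default_rng(0)
quality = rng.uniform(0.0, 1.0, size=8)
util = lambda S: float(np.sum(quality[S]))
print(shapley_values(8, util, n_permutations=500))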
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.15765v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.GT</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-sa/4.0/</dc:rights> |
|
|
<dc:creator>M\'elissa Tamine, Otmane Sakhi, Benjamin Heymann</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>TENG++: Time-Evolving Natural Gradient for Solving PDEs With Deep Neural Nets under General Boundary Conditions</title> |
|
|
<link>https://arxiv.org/abs/2512.15771</link> |
|
|
<description>arXiv:2512.15771v1 Announce Type: cross |
|
|
Abstract: Partial Differential Equations (PDEs) are central to modeling complex systems across physical, biological, and engineering domains, yet traditional numerical methods often struggle with high-dimensional or complex problems. Physics-Informed Neural Networks (PINNs) have emerged as an efficient alternative by embedding physics-based constraints into deep learning frameworks, but they face challenges in achieving high accuracy and handling complex boundary conditions. In this work, we extend the Time-Evolving Natural Gradient (TENG) framework to address Dirichlet boundary conditions, integrating natural gradient optimization with numerical time-stepping schemes, including Euler and Heun methods, to ensure both stability and accuracy. By incorporating boundary condition penalty terms into the loss function, the proposed approach enables precise enforcement of Dirichlet constraints. Experiments on the heat equation demonstrate the superior accuracy of the Heun method due to its second-order corrections and the computational efficiency of the Euler method for simpler scenarios. This work establishes a foundation for extending the framework to Neumann and mixed boundary conditions, as well as broader classes of PDEs, advancing the applicability of neural network-based solvers for real-world problems.</description> |
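As a small illustration of the two time-stepping schemes the abstract compares, the sketch below integrates the 1D heat equation with homogeneous Dirichlet boundaries by the method of lines, using an explicit Euler step and a Heun (second-order) step. It is a plain finite-difference baseline under assumed grid and step sizes; the TENG natural-gradient machinery itself is not reproduced.

# Euler vs Heun time stepping on u_t = u_xx, u(0,t) = u(1,t) = 0 (illustrative).
import numpy as np

nx, L, T = 101, 1.0, 0.1
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2                     # stable explicit step
u0 = np.sin(np.pi * x)               # exact solution: exp(-pi^2 t) sin(pi x)

def rhs(u):
    """Second-order finite-difference Laplacian with Dirichlet boundaries."""
    du = np.zeros_like(u)
    du[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return du

def integrate(step, u, t_end):
    t = 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        u = step(u, h)
        t += h
    return u

euler = lambda u, h: u + h * rhs(u)

def heun(u, h):
    k1 = rhs(u)
    k2 = rhs(u + h * k1)
    return u + 0.5 * h * (k1 + k2)   # second-order correction

exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x)
for name, step in [("Euler", euler), ("Heun", heun)]:
    err = np.max(np.abs(integrate(step, u0.copy(), T) - exact))
    print(f"{name}: max error = {err:.2e}")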
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.15771v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.NA</category> |
|
|
<category>math.NA</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Xinjie He, Chenggong Zhang</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Make the most of what you have: Resource-efficient randomized algorithms for matrix computations</title> |
|
|
<link>https://arxiv.org/abs/2512.15929</link> |
|
|
<description>arXiv:2512.15929v1 Announce Type: cross |
|
|
Abstract: In recent years, randomized algorithms have established themselves as fundamental tools in computational linear algebra, with applications in scientific computing, machine learning, and quantum information science. Many randomized matrix algorithms proceed by first collecting information about a matrix and then processing that data to perform some computational task. This thesis addresses the following question: How can one design algorithms that use this information as efficiently as possible, reliably achieving the greatest possible speed and accuracy for a limited data budget? |
|
|
The first part of this thesis focuses on low-rank approximation for positive-semidefinite matrices. Here, the goal is to compute an accurate approximation to a matrix after accessing as few entries of the matrix as possible. This part of the thesis explores the randomly pivoted Cholesky (RPCholesky) algorithm for this task, which achieves a level of speed and reliability greater than other methods for the same problem. |
|
|
The second part of this thesis considers the task of estimating attributes of an implicit matrix accessible only by matrix-vector products. This thesis describes the leave-one-out approach to developing matrix attribute estimation algorithms and develops optimized trace, diagonal, and row-norm estimation algorithms. |
|
|
The third part of this thesis considers randomized algorithms for overdetermined linear least-squares problems. Randomized algorithms for linear least-squares problems are asymptotically faster than any known deterministic algorithm, but recent work has raised questions about the accuracy of these methods in floating point arithmetic. This thesis shows these issues are resolvable by developing fast randomized least-squares solvers that achieve backward stability, the gold-standard stability guarantee for a numerical algorithm.</description>
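A hedged sketch of the randomly pivoted Cholesky idea discussed in the first part of the thesis: pivots are sampled with probability proportional to the current residual diagonal, and a rank-k factor is built while accessing only k columns of the matrix plus its diagonal. This is an illustrative rendering of the publicly described algorithm, not the thesis's reference implementation; the RBF test matrix is an assumption.

# Randomly pivoted Cholesky (RPCholesky) sketch for low-rank PSD approximation.
import numpy as np

def rp_cholesky(A, k, rng=None):
    """Return F with A ~= F @ F.T, sampling pivots proportional to the residual diagonal."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = A.shape[0]
    F = np.zeros((n, k))
    d = np.diag(A).astype(float).copy()
    for i in range(k):
        s = rng.choice(n, p=d / d.sum())           # pivot ~ residual diagonal
        g = A[:, s] - F[:, :i] @ F[s, :i]           # residual column at the pivot
        F[:, i] = g / np.sqrt(g[s])
        d = np.maximum(d - F[:, i] ** 2, 0.0)       # update residual diagonal
    return F

# toy usage: approximate an RBF kernel matrix from 500 points
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
A = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
F = rp_cholesky(A, k=50)
print("relative trace error:", (np.trace(A) - np.sum(F ** 2)) / np.trace(A))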
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.15929v1</guid> |
|
|
<category>math.NA</category> |
|
|
<category>cs.DS</category> |
|
|
<category>cs.NA</category> |
|
|
<category>stat.CO</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<arxiv:DOI>10.7907/pef3-mg80</arxiv:DOI> |
|
|
<dc:creator>Ethan N. Epperly</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>xtdml: Double Machine Learning Estimation to Static Panel Data Models with Fixed Effects in R</title> |
|
|
<link>https://arxiv.org/abs/2512.15965</link> |
|
|
<description>arXiv:2512.15965v1 Announce Type: cross |
|
|
Abstract: The double machine learning (DML) method combines the predictive power of machine learning with statistical estimation to conduct inference about the structural parameter of interest. This paper presents the R package `xtdml`, which implements DML methods for partially linear panel regression models with low-dimensional fixed effects and high-dimensional confounding variables, as proposed by Clarke and Polselli (2025). The package provides functionalities to: (a) learn nuisance functions with machine learning algorithms from the `mlr3` ecosystem, (b) handle unobserved individual heterogeneity by choosing among first-difference transformation, within-group transformation, and correlated random effects, and (c) transform the covariates with min-max normalization and polynomial expansion to improve learning performance. We showcase the use of `xtdml` with both simulated and real longitudinal data.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.15965v1</guid> |
|
|
<category>econ.EM</category> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Annalivia Polselli</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Lifting Biomolecular Data Acquisition</title> |
|
|
<link>https://arxiv.org/abs/2512.15984</link> |
|
|
<description>arXiv:2512.15984v1 Announce Type: cross |
|
|
Abstract: One strategy to scale up ML-driven science is to increase wet lab experiments' information density. We present a method based on a neural extension of compressed sensing to function space. We measure the activity of multiple different molecules simultaneously, rather than individually. Then, we deconvolute the molecule-activity map during model training. Co-design of wet lab experiments and learning algorithms provably leads to orders-of-magnitude gains in information density. We demonstrate on antibodies and cell therapies.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.15984v1</guid> |
|
|
<category>q-bio.BM</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Eli N. Weinstein, Andrei Slabodkin, Mattia G. Gollub, Kerry Dobbs, Xiao-Bing Cui, Fang Zhang, Kristina Gurung, Elizabeth B. Wood</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Provably Extracting the Features from a General Superposition</title> |
|
|
<link>https://arxiv.org/abs/2512.15987</link> |
|
|
<description>arXiv:2512.15987v1 Announce Type: cross |
|
|
Abstract: It is widely believed that complex machine learning models generally encode features through linear representations, but these features exist in superposition, making them challenging to recover. We study the following fundamental setting for learning features in superposition from black-box query access: we are given query access to a function \[ f(x)=\sum_{i=1}^n a_i\,\sigma_i(v_i^\top x), \] where each unit vector $v_i$ encodes a feature direction and $\sigma_i:\mathbb{R} \rightarrow \mathbb{R}$ is an arbitrary response function and our goal is to recover the $v_i$ and the function $f$. |
|
|
In learning-theoretic terms, superposition refers to the overcomplete regime, when the number of features is larger than the underlying dimension (i.e. $n > d$), which has proven especially challenging for typical algorithmic approaches. Our main result is an efficient query algorithm that, from noisy oracle access to $f$, identifies all feature directions whose responses are non-degenerate and reconstructs the function $f$. Crucially, our algorithm works in a significantly more general setting than all related prior results -- we allow for essentially arbitrary superpositions, only requiring that $v_i, v_j$ are not nearly identical for $i \neq j$, and general response functions $\sigma_i$. At a high level, our algorithm introduces an approach for searching in Fourier space by iteratively refining the search space to locate the hidden directions $v_i$.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.15987v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.DS</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Allen Liu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>CauSTream: Causal Spatio-Temporal Representation Learning for Streamflow Forecasting</title> |
|
|
<link>https://arxiv.org/abs/2512.16046</link> |
|
|
<description>arXiv:2512.16046v1 Announce Type: cross |
|
|
Abstract: Streamflow forecasting is crucial for water resource management and risk mitigation. While deep learning models have achieved strong predictive performance, they often overlook underlying physical processes, limiting interpretability and generalization. Recent causal learning approaches address these issues by integrating domain knowledge, yet they typically rely on fixed causal graphs that fail to adapt to data. We propose CauSTream, a unified framework for causal spatiotemporal streamflow forecasting. CauSTream jointly learns (i) a runoff causal graph among meteorological forcings and (ii) a routing graph capturing dynamic dependencies across stations. We further establish identifiability conditions for these causal structures under a nonparametric setting. We evaluate CauSTream on three major U.S. river basins across three forecasting horizons. The model consistently outperforms prior state-of-the-art methods, with performance gaps widening at longer forecast windows, indicating stronger generalization to unseen conditions. Beyond forecasting, CauSTream also learns causal graphs that capture relationships among hydrological factors and stations. The inferred structures align closely with established domain knowledge, offering interpretable insights into watershed dynamics. CauSTream offers a principled foundation for causal spatiotemporal modeling, with the potential to extend to a wide range of scientific and environmental applications.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16046v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.CE</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Shu Wan, Reepal Shah, John Sabo, Huan Liu, K. Sel\c{c}uk Candan</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Distributed Online Economic Dispatch With Time-Varying Coupled Inequality Constraints</title> |
|
|
<link>https://arxiv.org/abs/2512.16241</link> |
|
|
<description>arXiv:2512.16241v1 Announce Type: cross |
|
|
Abstract: We investigate the distributed online economic dispatch problem for power systems with time-varying coupled inequality constraints. The problem is formulated as a distributed online optimization problem in a multi-agent system. At each time step, each agent only observes its own instantaneous objective function and local inequality constraints; agents make decisions online and cooperate to minimize the sum of the time-varying objectives while satisfying the global coupled constraints. To solve the problem, we propose an algorithm based on the primal-dual approach combined with constraint-tracking. Under appropriate assumptions that the objective and constraint functions are convex, their gradients are uniformly bounded, and the path length of the optimal solution sequence grows sublinearly, we analyze theoretical properties of the proposed algorithm and prove that both the dynamic regret and the constraint violation are sublinear in the time horizon T. Finally, we evaluate the proposed algorithm on a time-varying economic dispatch problem in power systems using both synthetic data and Australian Energy Market data. The results demonstrate that the proposed algorithm performs effectively in terms of tracking performance, constraint satisfaction, and adaptation to time-varying disturbances, thereby providing a practical and theoretically well-supported solution for real-time distributed economic dispatch.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16241v1</guid> |
|
|
<category>math.OC</category> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Yingjie Zhou, Xiaoqian Wang, Tao Li</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Multivariate Uncertainty Quantification with Tomographic Quantile Forests</title> |
|
|
<link>https://arxiv.org/abs/2512.16383</link> |
|
|
<description>arXiv:2512.16383v1 Announce Type: cross |
|
|
Abstract: Quantifying predictive uncertainty is essential for safe and trustworthy real-world AI deployment. Yet, fully nonparametric estimation of conditional distributions remains challenging for multivariate targets. We propose Tomographic Quantile Forests (TQF), a nonparametric, uncertainty-aware, tree-based regression model for multivariate targets. TQF learns conditional quantiles of directional projections $\mathbf{n}^{\top}\mathbf{y}$ as functions of the input $\mathbf{x}$ and the unit direction $\mathbf{n}$. At inference, it aggregates quantiles across many directions and reconstructs the multivariate conditional distribution by minimizing the sliced Wasserstein distance via an efficient alternating scheme with convex subproblems. Unlike classical directional-quantile approaches that typically produce only convex quantile regions and require training separate models for different directions, TQF covers all directions with a single model without imposing convexity restrictions. We evaluate TQF on synthetic and real-world datasets, and release the source code on GitHub.</description> |
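To illustrate the directional-projection step this abstract builds on, the sketch below draws a few random unit directions n, projects a 2-D target onto each, and fits conditional quantile models for the scalar projections n^T y. It uses sklearn's quantile gradient boosting as a stand-in learner; the paper's forest model over (x, n) and its sliced-Wasserstein reconstruction are not reproduced, and the data-generating process is an assumption.

# Directional-projection quantiles for a multivariate target (illustrative stand-in).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_samples, d_y = 2000, 2
X = rng.uniform(-1, 1, size=(n_samples, 1))
# heteroscedastic 2-D target: noise scale grows with |x|
Y = np.hstack([np.sin(3 * X), np.cos(3 * X)]) \
    + rng.standard_normal((n_samples, d_y)) * (0.1 + 0.4 * np.abs(X))

directions = rng.standard_normal((5, d_y))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

models = {}
for i, n_vec in enumerate(directions):
    proj = Y @ n_vec                                  # scalar projection n^T y
    for alpha in (0.1, 0.5, 0.9):
        m = GradientBoostingRegressor(loss="quantile", alpha=alpha, n_estimators=200)
        models[(i, alpha)] = m.fit(X, proj)

x_new = np.array([[0.8]])
for i, n_vec in enumerate(directions):
    q = [models[(i, a)].predict(x_new)[0] for a in (0.1, 0.5, 0.9)]
    print(f"direction {n_vec.round(2)}: q10/q50/q90 of n^T y =", np.round(q, 3))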
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16383v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Takuya Kanazawa</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Stackelberg Learning from Human Feedback: Preference Optimization as a Sequential Game</title> |
|
|
<link>https://arxiv.org/abs/2512.16626</link> |
|
|
<description>arXiv:2512.16626v1 Announce Type: cross |
|
|
Abstract: We introduce Stackelberg Learning from Human Feedback (SLHF), a new framework for preference optimization. SLHF frames the alignment problem as a sequential-move game between two policies: a Leader, which commits to an action, and a Follower, which responds conditionally on the Leader's action. This approach decomposes preference optimization into a refinement problem for the Follower and an optimization problem against an adversary for the Leader. Unlike Reinforcement Learning from Human Feedback (RLHF), which assigns scalar rewards to actions, or Nash Learning from Human Feedback (NLHF), which seeks a simultaneous-move equilibrium, SLHF leverages the asymmetry of sequential play to capture richer preference structures. The sequential design of SLHF naturally enables inference-time refinement, as the Follower learns to improve the Leader's actions, and these refinements can be leveraged through iterative sampling. We compare the solution concepts of SLHF, RLHF, and NLHF, and lay out key advantages in consistency, data sensitivity, and robustness to intransitive preferences. Experiments on large language models demonstrate that SLHF achieves strong alignment across diverse preference datasets, scales from 0.5B to 8B parameters, and yields inference-time refinements that transfer across model families without further fine-tuning.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16626v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.GT</category> |
|
|
<category>cs.MA</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Barna P\'asztor, Thomas Kleine Buening, Andreas Krause</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>The Colombian legislative process, 2014-2025: networks, topics, and polarization</title> |
|
|
<link>https://arxiv.org/abs/2512.16827</link> |
|
|
<description>arXiv:2512.16827v1 Announce Type: cross |
|
|
Abstract: The legislative output of Colombia's House of Representatives between 2014 and 2025 is analyzed using 4,083 bills. Bipartite networks are constructed between parties and bills, and between representatives and bills, along with their projections, to characterize co-sponsorship patterns, centrality, and influence, and to assess whether political polarization is reflected in legislative collaboration. In parallel, the content of the initiatives is studied through semantic networks based on co-occurrences extracted from short descriptions, and topics by party and period are identified using a stochastic block model for weighted networks, with additional comparison using Latent Dirichlet Allocation. In addition, a Bayesian sociability model is applied to detect terms with robust connectivity and to summarize discursive cores. Overall, the approach integrates relational and semantic structure to describe thematic shifts across administrations, identify influential actors and collectives, and provide a reproducible synthesis that promotes transparency and citizen oversight of the legislative process.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16827v1</guid> |
|
|
<category>physics.soc-ph</category> |
|
|
<category>stat.CO</category> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Juan Sosa, Brayan Riveros, Emma J. Camargo-D\'iaz</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>On the Universal Representation Property of Spiking Neural Networks</title> |
|
|
<link>https://arxiv.org/abs/2512.16872</link> |
|
|
<description>arXiv:2512.16872v1 Announce Type: cross |
|
|
Abstract: Inspired by biology, spiking neural networks (SNNs) process information via discrete spikes over time, offering an energy-efficient alternative to the classical computing paradigm and classical artificial neural networks (ANNs). In this work, we analyze the representational power of SNNs by viewing them as sequence-to-sequence processors of spikes, i.e., systems that transform a stream of input spikes into a stream of output spikes. We establish the universal representation property for a natural class of spike train functions. Our results are fully quantitative, constructive, and near-optimal in the number of required weights and neurons. The analysis reveals that SNNs are particularly well-suited to represent functions with few inputs, low temporal complexity, or compositions of such functions. The latter is of particular interest, as it indicates that deep SNNs can efficiently capture composite functions via a modular design. As an application of our results, we discuss spike train classification. Overall, these results contribute to a rigorous foundation for understanding the capabilities and limitations of spike-based neuromorphic systems.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16872v1</guid> |
|
|
<category>cs.NE</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/publicdomain/zero/1.0/</dc:rights> |
|
|
<dc:creator>Shayan Hundrieser, Philipp Tuchel, Insung Kong, Johannes Schmidt-Hieber</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Learning Confidence Ellipsoids and Applications to Robust Subspace Recovery</title> |
|
|
<link>https://arxiv.org/abs/2512.16875</link> |
|
|
<description>arXiv:2512.16875v1 Announce Type: cross |
|
|
Abstract: We study the problem of finding confidence ellipsoids for an arbitrary distribution in high dimensions. Given samples from a distribution $D$ and a confidence parameter $\alpha$, the goal is to find the smallest volume ellipsoid $E$ which has probability mass $\Pr_{D}[E] \ge 1-\alpha$. Ellipsoids are a highly expressive class of confidence sets as they can capture correlations in the distribution, and can approximate any convex set. This problem has been studied in many different communities. In statistics, this is the classic minimum volume estimator introduced by Rousseeuw as a robust non-parametric estimator of location and scatter. However in high dimensions, it becomes NP-hard to obtain any non-trivial approximation factor in volume when the condition number $\beta$ of the ellipsoid (ratio of the largest to the smallest axis length) goes to $\infty$. This motivates the focus of our paper: can we efficiently find confidence ellipsoids with volume approximation guarantees when compared to ellipsoids of bounded condition number $\beta$? |
|
|
Our main result is a polynomial time algorithm that finds an ellipsoid $E$ whose volume is within a $O(\beta^{\gamma d})$ multiplicative factor of the volume of best $\beta$-conditioned ellipsoid while covering at least $1-O(\alpha/\gamma)$ probability mass for any $\gamma < \alpha$. We complement this with a computational hardness result that shows that such a dependence seems necessary up to constants in the exponent. The algorithm and analysis uses the rich primal-dual structure of the minimum volume enclosing ellipsoid and the geometric Brascamp-Lieb inequality. As a consequence, we obtain the first polynomial time algorithm with approximation guarantees on worst-case instances of the robust subspace recovery problem.</description> |
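For background on the primal-dual object this abstract builds on, the sketch below runs the classical Khachiyan iteration for the minimum-volume enclosing ellipsoid of a point set. It is illustrative context only, not the paper's confidence-ellipsoid algorithm; the tolerance, iteration cap, and test data are assumptions.

# Khachiyan iteration for the minimum-volume enclosing ellipsoid (MVEE).
import numpy as np

def mvee(points, tol=1e-6, max_iter=1000):
    """Return (center c, shape A) with (x - c)^T A (x - c) <= 1 covering all points."""
    n, d = points.shape
    Q = np.hstack([points, np.ones((n, 1))]).T          # lifted (d+1) x n matrix
    u = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum("ij,jk,ki->i", Q.T, np.linalg.inv(X), Q)  # leverage scores
        j = int(np.argmax(M))
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        if step < tol:
            break
        u *= (1.0 - step)
        u[j] += step
    c = points.T @ u
    A = np.linalg.inv(points.T @ np.diag(u) @ points - np.outer(c, c)) / d
    return c, A

rng = np.random.default_rng(0)
pts = rng.standard_normal((200, 2)) @ np.array([[2.0, 0.3], [0.0, 0.5]])
c, A = mvee(pts)
print("max (x-c)^T A (x-c) over points:",
      np.max(np.einsum("ij,jk,ik->i", pts - c, A, pts - c)))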
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.16875v1</guid> |
|
|
<category>cs.DS</category> |
|
|
<category>cs.LG</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.ML</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Chao Gao, Liren Shan, Vaidehi Srinivas, Aravindan Vijayaraghavan</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Unified Inference on Moment Restrictions with Nuisance Parameters</title> |
|
|
<link>https://arxiv.org/abs/2202.11031</link> |
|
|
<description>arXiv:2202.11031v3 Announce Type: replace |
|
|
Abstract: This paper proposes a simple unified inference approach on moment restrictions in the presence of nuisance parameters. The proposed test is constructed based on a new characterization that avoids the estimation of nuisance parameters and can be broadly applied across diverse settings. Under suitable conditions, the test is shown to be asymptotically size controlled and consistent for both independent and dependent samples. Monte Carlo simulations show that the test performs well in finite samples. Numerical results from the application to conditional moment restriction models with weak instruments demonstrate that the proposed method may improve upon existing approaches in the literature.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2202.11031v3</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>econ.EM</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Xingyu Li, Xiaojun Song, Zhenting Sun</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Theoretical Foundations of Conformal Prediction</title> |
|
|
<link>https://arxiv.org/abs/2411.11824</link> |
|
|
<description>arXiv:2411.11824v4 Announce Type: replace |
|
|
Abstract: This book is about conformal prediction and related inferential techniques that build on permutation tests and exchangeability. These techniques are useful in a diverse array of tasks, including hypothesis testing and providing uncertainty quantification guarantees for machine learning systems. Much of the current interest in conformal prediction is due to its ability to integrate into complex machine learning workflows, solving the problem of forming prediction sets without any assumptions on the form of the data generating distribution. Since contemporary machine learning algorithms have generally proven difficult to analyze directly, conformal prediction's main appeal is its ability to provide formal, finite-sample guarantees when paired with such methods. |
|
|
The goal of this book is to teach the reader about the fundamental technical arguments that arise when researching conformal prediction and related questions in distribution-free inference. Many of these proof strategies, especially the more recent ones, are scattered among research papers, making it difficult for researchers to understand where to look, which results are important, and how exactly the proofs work. We hope to bridge this gap by curating what we believe to be some of the most important results in the literature and presenting their proofs in a unified language, with illustrations, and with an eye towards pedagogy.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2411.11824v4</guid> |
|
|
<category>math.ST</category> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.ML</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Anastasios N. Angelopoulos, Rina Foygel Barber, Stephen Bates</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>An accuracy-runtime trade-off comparison of scalable Gaussian process approximations for spatial data</title> |
|
|
<link>https://arxiv.org/abs/2501.11448</link> |
|
|
<description>arXiv:2501.11448v4 Announce Type: replace |
|
|
Abstract: Gaussian processes (GPs) are flexible, probabilistic, nonparametric models widely used in fields such as spatial statistics and machine learning. A drawback of Gaussian processes is their computational cost, with $O(N^3)$ time and $O(N^2)$ memory complexity, which makes them prohibitive for large data sets. Numerous approximation techniques have been proposed to address this limitation. In this work, we systematically compare the accuracy of different Gaussian process approximations with respect to likelihood evaluation, parameter estimation, and prediction, explicitly accounting for the computational time required. We analyze the trade-off between accuracy and runtime on multiple simulated and large-scale real-world data sets and find that Vecchia approximations consistently provide the best accuracy-runtime trade-off across most settings considered.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2501.11448v4</guid> |
|
|
<category>stat.CO</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-sa/4.0/</dc:rights> |
|
|
<dc:creator>Filippo Rambelli, Fabio Sigrist</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods</title> |
|
|
<link>https://arxiv.org/abs/2502.01384</link> |
|
|
<description>arXiv:2502.01384v3 Announce Type: replace |
|
|
Abstract: Discrete diffusion models have recently gained significant attention due to their ability to process complex discrete structures for language modeling. However, fine-tuning these models with policy gradient methods, as is commonly done in Reinforcement Learning from Human Feedback (RLHF), remains a challenging task. We propose an efficient, broadly applicable, and theoretically justified policy gradient algorithm, called Score Entropy Policy Optimization (SEPO), for fine-tuning discrete diffusion models over non-differentiable rewards. Our numerical experiments across several discrete generative tasks demonstrate the scalability and efficiency of our method. Our code is available at https://github.com/ozekri/SEPO.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2502.01384v3</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.CL</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<arxiv:journal_reference>NeurIPS 2025</arxiv:journal_reference> |
|
|
<dc:creator>Oussama Zekri, Nicolas Boull\'e</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Nested subspace learning with flags</title> |
|
|
<link>https://arxiv.org/abs/2502.06022</link> |
|
|
<description>arXiv:2502.06022v2 Announce Type: replace |
|
|
Abstract: Many machine learning methods look for low-dimensional representations of the data. The underlying subspace can be estimated by first choosing a dimension $q$ and then optimizing a certain objective function over the space of $q$-dimensional subspaces (the Grassmannian). Trying different values of $q$ in general yields non-nested subspaces, which raises an important issue of consistency between the data representations. In this paper, we propose a simple and easily implementable principle to enforce nestedness in subspace learning methods. It consists in lifting Grassmannian optimization criteria to flag manifolds (the space of nested subspaces of increasing dimension) via nested projectors. We apply the flag trick to several classical machine learning methods and show that it successfully addresses the nestedness issue.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2502.06022v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Tom Szwagier, Xavier Pennec</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>An interpretation of the Brownian bridge as a physics-informed prior for the Poisson equation</title> |
|
|
<link>https://arxiv.org/abs/2503.00213</link> |
|
|
<description>arXiv:2503.00213v2 Announce Type: replace |
|
|
Abstract: Many inverse problems require reconstructing physical fields from limited and noisy data while incorporating known governing equations. A growing body of work within probabilistic numerics formalizes such tasks via Bayesian inference in function spaces by assigning a physically meaningful prior to the latent field. In this work, we demonstrate that Brownian bridge Gaussian processes can be viewed as a softly-enforced physics-constrained prior for the Poisson equation. We first show equivalence between the variational problem associated with the Poisson equation and a kernel ridge regression objective. Then, through the connection between Gaussian process regression and kernel methods, we identify a Gaussian process for which the posterior mean function and the minimizer to the variational problem agree, thereby placing this PDE-based regularization within a fully Bayesian framework. This connection allows us to probe different theoretical questions, such as convergence and behavior of inverse problems. We then develop a finite-dimensional representation in function space and prove convergence of the projected prior and resulting posterior in Wasserstein distance. Finally, we connect the method to the important problem of identifying model-form error in applications, providing a diagnostic for model misspecification.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2503.00213v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<category>physics.comp-ph</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Alex Alberts, Ilias Bilionis</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Scalable Krylov Subspace Methods for Generalized Mixed Effects Models with Crossed Random Effects</title> |
|
|
<link>https://arxiv.org/abs/2505.09552</link> |
|
|
<description>arXiv:2505.09552v2 Announce Type: replace |
|
|
Abstract: Mixed-effects models are widely used to model data with hierarchical grouping structures and high-cardinality categorical predictor variables. However, for high-dimensional crossed random effects, sparse Cholesky decompositions, the current standard approach, can become prohibitively slow. In this work, we present Krylov subspace-based methods that address these computational bottlenecks and analyze them both theoretically and empirically. In particular, we derive new results on the convergence and accuracy of the preconditioned stochastic Lanczos quadrature and conjugate gradient methods for mixed-effects models, and we develop scalable methods for calculating predictive variances. In experiments with simulated and real-world data, the proposed methods yield speedups by factors of up to about 10,000 and are numerically more stable than Cholesky-based computations as implemented in state-of-the-art packages such as lme4 and glmmTMB. Our methodology is available in the open-source C++ software library GPBoost, with accompanying high-level Python and R packages.</description> |
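As a concrete illustration of the building block such Krylov approaches rely on, the sketch below is a generic preconditioned conjugate gradient solver: linear solves with a symmetric positive-definite matrix are performed using only matrix-vector products, avoiding an explicit Cholesky factorization. The mixed-model-specific preconditioners and the stochastic Lanczos quadrature estimator from the paper are not shown; the Jacobi preconditioner and test matrix are assumptions.

# Generic preconditioned conjugate gradient (PCG) solver.
import numpy as np

def pcg(matvec, b, precond=None, tol=1e-8, max_iter=1000):
    """Solve A x = b for SPD A given only the map v -> A @ v."""
    if precond is None:
        precond = lambda r: r                 # identity preconditioner
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
n = 500
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                   # SPD test matrix
b = rng.standard_normal(n)
x = pcg(lambda v: A @ v, b, precond=lambda r: r / np.diag(A))  # Jacobi preconditioner
print("residual norm:", np.linalg.norm(A @ x - b))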
|
|
<guid isPermaLink="false">oai:arXiv.org:2505.09552v2</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Pascal K\"undig, Fabio Sigrist</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Bayesian Deep Learning for Discrete Choice</title> |
|
|
<link>https://arxiv.org/abs/2505.18077</link> |
|
|
<description>arXiv:2505.18077v2 Announce Type: replace |
|
|
Abstract: Discrete choice models (DCMs) are used to analyze individual decision-making in contexts such as transportation choices, political elections, and consumer preferences. DCMs play a central role in applied econometrics by enabling inference on key economic variables, such as marginal rates of substitution, rather than focusing solely on predicting choices on new unlabeled data. However, while traditional DCMs offer high interpretability and support for point and interval estimation of economic quantities, these models often underperform in predictive tasks compared to deep learning (DL) models. Despite their predictive advantages, DL models remain largely underutilized in discrete choice due to concerns about their lack of interpretability, unstable parameter estimates, and the absence of established methods for uncertainty quantification. Here, we introduce a deep learning model architecture specifically designed to integrate with approximate Bayesian inference methods, such as Stochastic Gradient Langevin Dynamics (SGLD). Our proposed model collapses to behaviorally informed hypotheses when data is limited, mitigating overfitting and instability in underspecified settings while retaining the flexibility to capture complex nonlinear relationships when sufficient data is available. We demonstrate our approach using SGLD through a Monte Carlo simulation study, evaluating both predictive metrics--such as out-of-sample balanced accuracy--and inferential metrics--such as empirical coverage for marginal rates of substitution interval estimates. Additionally, we present results from two empirical case studies: one using revealed mode choice data in NYC, and the other based on the widely used Swiss train choice stated preference data.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2505.18077v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<category>econ.EM</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Daniel F. Villarraga, Ricardo A. Daziano</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Short-term CO2 emissions forecasting: insight from the Italian electricity market</title> |
|
|
<link>https://arxiv.org/abs/2507.12992</link> |
|
|
<description>arXiv:2507.12992v2 Announce Type: replace |
|
|
Abstract: This study investigates the short-term forecasting of carbon emissions from electricity generation in the Italian power market. Using hourly data from 2021 to 2023, several statistical models and forecast combination methods are evaluated and compared at the national and zonal levels. Four main model classes are considered: (i) linear parametric models, such as seasonal autoregressive integrated moving average and its exogenous variable extension; (ii) functional parametric models, including seasonal functional autoregressive models, with and without exogenous variables; (iii) (semi) non-parametric and possibly non-linear models, notably the generalised additive model (GAM) and TBATS (trigonometric seasonality, Box-Cox transformation, ARMA errors, trend, and seasonality); and (iv) a semi-functional approach based on the K-nearest neighbours. Forecast combinations include simple averaging, the optimal Bates and Granger weighting scheme, and a selection-based strategy that chooses the best model for each hour. The results show that the GAM produces the most accurate forecasts during the daytime hours, while the functional parametric models perform best in the early morning. Among the combination methods, the simple average and the selection-based approaches consistently outperform all individual models. The findings underscore the value of hybrid forecasting frameworks in improving the accuracy and reliability of short-term carbon emissions predictions in power systems. In addition, they highlight the importance of considering zonal specificities when implementing flexible energy demand strategies, as the timing of low-carbon emissions varies between market zones throughout the day.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2507.12992v2</guid> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Pierdomenico Duttilo, Francesco Lisi</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>TACE: A unified Irreducible Cartesian Tensor Framework for Atomistic Machine Learning</title> |
|
|
<link>https://arxiv.org/abs/2509.14961</link> |
|
|
<description>arXiv:2509.14961v2 Announce Type: replace |
|
|
Abstract: Here, we introduce the Tensor Atomic Cluster Expansion (TACE), a unified framework formulated entirely in Cartesian space, enabling systematic and consistent prediction of arbitrary structure-dependent tensorial properties. TACE achieves this by decomposing atomic environments into a complete hierarchy of irreducible Cartesian tensors, ensuring symmetry-consistent representations that naturally encode invariance and equivariance constraints. Beyond geometry, TACE incorporates universal embeddings that flexibly integrate diverse attributes including computational levels, charges, magnetic moments and field perturbations. This allows explicit control over external invariants and equivariants in the prediction process. Long-range interactions are also accurately described through the Latent Ewald Summation module within the short-range approximation, providing a rigorous yet computationally efficient treatment of electrostatic and dispersion effects. We demonstrate that TACE attains accuracy, stability, and efficiency on par with or surpassing leading equivariant frameworks across finite molecules and extended materials. This includes in-domain and out-of-domain benchmarks, spectra, Hessian, external-field responses, charged and magnetic systems, multi-fidelity training, heterogeneous catalysis, and even superior performance within the uMLIP benchmark. Crucially, TACE bridges scalar and tensorial modeling and establishes a Cartesian-space paradigm that unifies and extends beyond the design space of spherical-tensor-based methods. This work lays the foundation for a new generation of universal atomistic machine learning models capable of systematically capturing the rich interplay of geometry, fields and material properties within a single coherent framework.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2509.14961v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cond-mat.mtrl-sci</category> |
|
|
<category>cs.LG</category> |
|
|
<category>physics.chem-ph</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-sa/4.0/</dc:rights> |
|
|
<dc:creator>Zemin Xu, Wenbo Xie, Daiqian Xie, P. Hu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>The Role of Congeniality in Multiple Imputation for Doubly Robust Causal Estimation</title> |
|
|
<link>https://arxiv.org/abs/2510.11633</link> |
|
|
<description>arXiv:2510.11633v2 Announce Type: replace |
|
|
Abstract: This paper provides clear and practical guidance on the specification of imputation models when multiple imputation is used in conjunction with doubly robust estimation methods for causal inference. Through theoretical arguments and targeted simulations, we demonstrate that if a confounder has missing data, the corresponding imputation model must include all variables appearing in either the propensity score model or the outcome model, in addition to both the exposure and the outcome, and that these variables must enter the imputation model in the same functional form as in the final analysis. Violating these conditions can lead to biased treatment effect estimates, even when both components of the doubly robust estimator are correctly specified. We present a mathematical framework for doubly robust estimation combined with multiple imputation, establish the theoretical requirements for proper imputation in this setting, and demonstrate the consequences of misspecification through simulation. Based on these findings, we offer concrete recommendations to ensure valid inference when using multiple imputation with doubly robust methods in applied causal analyses.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2510.11633v2</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Lucy D'Agostino McGowan</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Cartan meets Cram\'er-Rao</title> |
|
|
<link>https://arxiv.org/abs/2511.15612</link> |
|
|
<description>arXiv:2511.15612v2 Announce Type: replace |
|
|
Abstract: A Cartan-geometric, jet bundle formulation of curvature-aware variance bounds in parametric statistical estimation is developed. Building on our earlier extrinsic Hilbert space approach to the Cram\'er-Rao and Bhattacharyya-type inequalities, we show that the curvature corrections induced by the square root embedding of a statistical model admit a canonical intrinsic interpretation via jet geometry and Cartan's prolongation theory. |
|
|
For a scalar-parameter family with square root map $s_\theta=\sqrt{f(\cdot;\theta)}\in L^2(\mu)$, we regard $s_\theta$ as a section of the statistical bundle $E=\Theta\times L^2(\mu)$ and study its finite-order prolongations. We point out that the classical algebraic efficiency condition--that the estimator residual $(T-\theta)s_\theta$ lies in the span of derivatives of $s_\theta$ up to order $m$--is equivalent to the existence of a linear ordinary differential equation (ODE) of order $m$ satisfied by the square root map. Geometrically, this means the prolonged section lies in an ODE-defined submanifold of the jet bundle and is an integral curve of the restricted Cartan vector field. |
|
|
The obstruction to such finite-order integrability is identified with the vertical component of the canonical Ehresmann connection on the jet tower, which coincides with the curvature correction term in variance bounds. This establishes a direct correspondence between algebraic projection conditions in $L^2(\mu)$ and intrinsic holonomy properties of statistical sections, yielding a unified geometric interpretation of higher-order information inequalities.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2511.15612v2</guid> |
|
|
<category>math.ST</category> |
|
|
<category>cs.IT</category> |
|
|
<category>eess.SP</category> |
|
|
<category>math.DG</category> |
|
|
<category>math.IT</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Sunder Ram Krishnan</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Understanding Overparametrization in Survival Models through Interpolation</title> |
|
|
<link>https://arxiv.org/abs/2512.12463</link> |
|
|
<description>arXiv:2512.12463v2 Announce Type: replace |
|
|
Abstract: Classical statistical learning theory predicts a U-shaped relationship between test loss and model capacity, driven by the bias-variance trade-off. Recent advances in modern machine learning have revealed a more complex pattern, double-descent, in which test loss, after peaking near the interpolation threshold, decreases again as model capacity continues to grow. While this behavior has been extensively analyzed in regression and classification, its manifestation in survival analysis remains unexplored. This study investigates overparametrization in four representative survival models: DeepSurv, PC-Hazard, Nnet-Survival, and N-MTLR. We rigorously define interpolation and finite-norm interpolation, two key characteristics of loss-based models needed to understand double-descent. We then show the existence (or absence) of (finite-norm) interpolation for all four models. Our findings clarify how likelihood-based losses and model implementation jointly determine the feasibility of interpolation and show that overparametrization should not be regarded as benign for survival models. All theoretical results are supported by numerical experiments that highlight the distinct generalization behaviors of survival models.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.12463v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Yin Liu, Jianwen Cai, Didong Li</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Optimal Decision Rules when Payoffs are Partially Identified</title> |
|
|
<link>https://arxiv.org/abs/2204.11748</link> |
|
|
<description>arXiv:2204.11748v4 Announce Type: replace-cross |
|
|
Abstract: We derive asymptotically optimal statistical decision rules for discrete choice problems when payoffs depend on a partially-identified parameter $\theta$ and the decision maker can use a point-identified parameter $\mu$ to deduce restrictions on $\theta$. Examples include treatment choice under partial identification and pricing with rich unobserved heterogeneity. Our notion of optimality combines a minimax approach to handle the ambiguity from partial identification of $\theta$ given $\mu$ with an average risk minimization approach for $\mu$. We show how to implement optimal decision rules using the bootstrap and (quasi-)Bayesian methods in both parametric and semiparametric settings. We provide detailed applications to treatment choice and optimal pricing. Our asymptotic approach is well suited for realistic empirical settings in which the derivation of finite-sample optimal rules is intractable.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2204.11748v4</guid> |
|
|
<category>econ.EM</category> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Timothy Christensen, Hyungsik Roger Moon, Frank Schorfheide</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Online Bandits with (Biased) Offline Data: Adaptive Learning under Distribution Mismatch</title> |
|
|
<link>https://arxiv.org/abs/2405.02594</link> |
|
|
<description>arXiv:2405.02594v2 Announce Type: replace-cross |
|
|
Abstract: Traditional online learning models are typically initialized from scratch. By contrast, contemporary real-world applications often have access to historical datasets that can potentially enhance the online learning process. We study how offline data can be leveraged to facilitate online learning in stochastic multi-armed bandits and combinatorial bandits. In our study, the probability distributions that govern the offline data and the online rewards can be different. We first show that, without a non-trivial upper bound on their difference, no non-anticipatory policy can outperform the classical Upper Confidence Bound (UCB) policy, even with access to offline data. As a complement, we propose an online policy MIN-UCB for multi-armed bandits. MIN-UCB outperforms UCB when such an upper bound is available. MIN-UCB adaptively chooses to utilize the offline data when they are deemed informative, and to ignore them otherwise. We establish that MIN-UCB achieves tight regret bounds in both instance-independent and instance-dependent settings. We generalize our approach to the combinatorial bandit setting by introducing MIN-COMB-UCB, and we provide corresponding instance-dependent and instance-independent regret bounds. We illustrate how various factors, such as the biases and the size of offline datasets, affect the utility of offline data in online learning. We discuss several applications and conduct numerical experiments to validate our findings.</description>
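For orientation, the sketch below is a plain UCB policy whose counts and empirical means are warm-started from an offline dataset, showing how offline samples enter the index. It is only the classical baseline the abstract compares against; the paper's MIN-UCB rule for adaptively trusting or ignoring biased offline data is not implemented, and the reward model and offline statistics are assumptions.

# Plain UCB warm-started with offline data (illustrative baseline, not MIN-UCB).
import numpy as np

def ucb_with_offline(true_means, offline_counts, offline_means, horizon, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    k = len(true_means)
    counts = np.array(offline_counts, dtype=float)
    sums = np.array(offline_means, dtype=float) * counts
    regret = 0.0
    for t in range(1, horizon + 1):
        means = np.divide(sums, counts, out=np.zeros(k), where=counts > 0)
        bonus = np.sqrt(2.0 * np.log(max(t, 2)) / np.maximum(counts, 1e-12))
        bonus[counts == 0] = np.inf                  # force initial exploration
        arm = int(np.argmax(means + bonus))
        reward = rng.normal(true_means[arm], 1.0)
        counts[arm] += 1
        sums[arm] += reward
        regret += max(true_means) - true_means[arm]
    return regret

true_means = [0.2, 0.5, 0.45]
# offline data whose distribution may be biased relative to the online rewards
print("regret:", ucb_with_offline(true_means, offline_counts=[200, 0, 0],
                                  offline_means=[0.25, 0.0, 0.0], horizon=5000))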
|
|
<guid isPermaLink="false">oai:arXiv.org:2405.02594v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Wang Chi Cheung, Lixing Lyu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Bandits with Preference Feedback: A Stackelberg Game Perspective</title> |
|
|
<link>https://arxiv.org/abs/2406.16745</link> |
|
|
<description>arXiv:2406.16745v3 Announce Type: replace-cross |
|
|
Abstract: Bandits with preference feedback present a powerful tool for optimizing unknown target functions when only pairwise comparisons are allowed instead of direct value queries. This model allows for incorporating human feedback into online inference and optimization and has been employed in systems for fine-tuning large language models. The problem is well understood in simplified settings with linear target functions or over small finite domains, which limits practical interest. Taking the next step, we consider infinite domains and nonlinear (kernelized) rewards. In this setting, selecting a pair of actions is quite challenging and requires balancing exploration and exploitation at two levels: within the pair, and along the iterations of the algorithm. We propose MAXMINLCB, which emulates this trade-off as a zero-sum Stackelberg game, and chooses action pairs that are informative and yield favorable rewards. MAXMINLCB consistently outperforms existing algorithms and satisfies an anytime-valid rate-optimal regret guarantee. This is due to our novel preference-based confidence sequences for kernelized logistic estimators.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2406.16745v3</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.GT</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-sa/4.0/</dc:rights> |
|
|
<arxiv:DOI>10.52202/079017-0383</arxiv:DOI> |
|
|
<dc:creator>Barna P\'asztor, Parnian Kassraie, Andreas Krause</dc:creator> |
|
|
</item> |
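One loose reading of the max-min selection above, reduced to a finite toy domain with count-based confidence bounds (the paper works with kernelized logistic estimators over infinite domains): the leader picks the action with the best pessimistic worst-case comparison, and the follower picks the rival that is most optimistic about beating it. The selection rules, confidence radius, and utilities below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6                                                   # finite candidate set for the toy
f = rng.normal(size=n)                                  # latent utilities
prob = 1 / (1 + np.exp(-(f[:, None] - f[None, :])))     # P(i preferred over j)

wins, plays = np.ones((n, n)), 2 * np.ones((n, n))      # smoothed pairwise counts
for t in range(1, 301):
    p_hat = wins / plays
    rad = np.sqrt(np.log(2 * t) / plays)                # crude confidence radius
    lcb, ucb = p_hat - rad, p_hat + rad
    i = int(np.argmax(lcb.min(axis=1)))                 # leader: best pessimistic worst case
    rival = ucb[:, i].copy()
    rival[i] = -np.inf                                   # no self-duels
    j = int(np.argmax(rival))                            # follower: most optimistic opponent
    y = rng.binomial(1, prob[i, j])                      # observe the duel outcome
    wins[i, j] += y
    wins[j, i] += 1 - y
    plays[i, j] += 1
    plays[j, i] += 1

print("true best arm:", int(np.argmax(f)), "| most-dueled arm:", int(np.argmax(plays.sum(axis=1))))
```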
|
|
<item> |
|
|
<title>Unsupervised discovery of the shared and private geometry in multi-view data</title> |
|
|
<link>https://arxiv.org/abs/2408.12091</link> |
|
|
<description>arXiv:2408.12091v3 Announce Type: replace-cross |
|
|
Abstract: Studying complex real-world phenomena often involves data from multiple views (e.g. sensor modalities or brain regions), each capturing different aspects of the underlying system. Within neuroscience, there is growing interest in large-scale simultaneous recordings across multiple brain regions. Understanding the relationship between views (e.g., the neural activity in each region recorded) can reveal fundamental insights into each view and the system as a whole. However, existing methods to characterize such relationships lack the expressivity required to capture nonlinear relationships, describe only shared sources of variance, or discard geometric information that is crucial to drawing insights from data. Here, we present SPLICE: a neural network-based method that infers disentangled, interpretable representations of private and shared latent variables from paired samples of high-dimensional views. Compared to competing methods, we demonstrate that SPLICE 1) disentangles shared and private representations more effectively, 2) yields more interpretable representations by preserving geometry, and 3) is more robust to incorrect a priori estimates of latent dimensionality. We propose our approach as a general-purpose method for finding succinct and interpretable descriptions of paired data sets in terms of disentangled shared and private latent variables.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2408.12091v3</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>q-bio.NC</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Sai Koukuntla, Joshua B. Julian, Jesse C. Kaminsky, Manuel Schottdorf, David W. Tank, Carlos D. Brody, Adam S. Charles</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Provable optimal transport with transformers: The essence of depth and prompt engineering</title> |
|
|
<link>https://arxiv.org/abs/2410.19931</link> |
|
|
<description>arXiv:2410.19931v3 Announce Type: replace-cross |
|
|
Abstract: Despite their empirical success, the internal mechanism by which transformer models align tokens during language processing remains poorly understood. This paper provides a mechanistic and theoretical explanation of token alignment in LLMs. We first present empirical evidence showing that, in machine translation, attention weights progressively align translated word pairs across layers, closely approximating Optimal Transport (OT) between word embeddings. Building on this observation, we prove that softmax self-attention layers can simulate gradient descent on the dual of the entropy-regularized OT problem, providing a theoretical foundation for the alignment. Our analysis yields a constructive convergence bound showing that transformer depth controls OT approximation accuracy. A direct implication is that standard transformers can sort lists of varying lengths without any parameter adjustment, up to an error term that vanishes with transformer depth.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2410.19931v3</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>math.OC</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Hadi Daneshmand</dc:creator> |
|
|
</item> |
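For reference, the entropy-regularized OT problem that the paper shows attention layers implicitly solve can also be solved with the classical Sinkhorn iterations (alternating dual updates). The sketch below is that baseline on a toy alignment problem; it is not the paper's transformer construction, and the points, cost, and regularization strength are arbitrary.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Entropy-regularized OT via Sinkhorn: alternating updates of the dual scalings."""
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]          # transport plan with marginals a and b

rng = np.random.default_rng(0)
n = 5
x, y = np.sort(rng.normal(size=n)), np.sort(rng.normal(size=n))   # toy "embeddings" to align
C = (x[:, None] - y[None, :]) ** 2                                 # squared-distance cost
P = sinkhorn(np.full(n, 1 / n), np.full(n, 1 / n), C)
print(np.round(P * n, 2))   # approximately a permutation: sorted points align in order
```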
|
|
<item> |
|
|
<title>Artificial Intelligence for Microbiology and Microbiome Research</title> |
|
|
<link>https://arxiv.org/abs/2411.01098</link> |
|
|
<description>arXiv:2411.01098v2 Announce Type: replace-cross |
|
|
Abstract: Advancements in artificial intelligence (AI) have transformed many scientific fields, with microbiology and microbiome research now experiencing significant breakthroughs through machine learning applications. This review provides a comprehensive overview of AI-driven approaches tailored for microbiology and microbiome studies, emphasizing both technical advancements and biological insights. We begin with an introduction to foundational AI techniques, including primary machine learning paradigms and various deep learning architectures, and offer guidance on choosing between traditional machine learning and sophisticated deep learning methods based on specific research goals. The primary section on application scenarios spans diverse research areas, from taxonomic profiling, functional annotation and prediction, microbe-X interactions, microbial ecology, metabolic modeling, precision nutrition, and clinical microbiology, to prevention and therapeutics. Finally, we discuss challenges in this field and highlight some recent breakthroughs. Together, this review underscores AI's transformative role in microbiology and microbiome research, paving the way for innovative methodologies and applications that enhance our understanding of microbial life and its impact on our planet and our health.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2411.01098v2</guid> |
|
|
<category>q-bio.QM</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Xu-Wen Wang, Tong Wang, Yang-Yu Liu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Proactive Model Adaptation Against Concept Drift for Online Time Series Forecasting</title> |
|
|
<link>https://arxiv.org/abs/2412.08435</link> |
|
|
<description>arXiv:2412.08435v5 Announce Type: replace-cross |
|
|
Abstract: Time series forecasting always faces the challenge of concept drift, where data distributions evolve over time, leading to a decline in forecast model performance. Existing solutions are based on online learning, which continually organizes recent time series observations into new training samples and updates model parameters according to the forecasting feedback on recent data. However, they overlook a critical issue: the ground-truth future values of each sample only become available after the forecast horizon has passed. This delay creates a temporal gap between the training samples and the test sample. Our empirical analysis reveals that the gap can introduce concept drift, causing forecast models to adapt to outdated concepts. In this paper, we present Proceed, a novel proactive model adaptation framework for online time series forecasting. Proceed first estimates the concept drift between the recently used training samples and the current test sample. It then employs an adaptation generator to efficiently translate the estimated drift into parameter adjustments, proactively adapting the model to the test sample. To enhance the generalization capability of the framework, Proceed is trained on synthetic, diverse concept drifts. Extensive experiments on five real-world datasets across various forecast models demonstrate that Proceed delivers larger performance improvements than state-of-the-art online learning methods, significantly improving forecast models' resilience against concept drift. Code is available at https://github.com/SJTU-DMTai/OnlineTSF.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2412.08435v5</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.CE</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<arxiv:DOI>10.1145/3690624.3709210</arxiv:DOI> |
|
|
<dc:creator>Lifan Zhao, Yanyan Shen</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Closed-Form Feedback-Free Learning with Forward Projection</title> |
|
|
<link>https://arxiv.org/abs/2501.16476</link> |
|
|
<description>arXiv:2501.16476v4 Announce Type: replace-cross |
|
|
Abstract: State-of-the-art backpropagation-free learning methods employ local error feedback to direct iterative optimisation via gradient descent. Here, we examine the more restrictive setting where retrograde communication from neuronal outputs is unavailable for pre-synaptic weight optimisation. We propose Forward Projection (FP), a randomised closed-form training method requiring only a single forward pass over the dataset without retrograde communication. FP generates target values for pre-activation membrane potentials through randomised nonlinear projections of pre-synaptic inputs and labels. Local loss functions are optimised using closed-form regression without feedback from downstream layers. A key advantage is interpretability: membrane potentials in FP-trained networks encode information interpretable layer-wise as label predictions. Across several biomedical datasets, FP achieves generalisation comparable to gradient descent-based local learning methods while requiring only a single forward propagation step, yielding significant training speedup. In few-shot learning tasks, FP produces more generalisable models than backpropagation-optimised alternatives, with local interpretation functions successfully identifying clinically salient diagnostic features.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2501.16476v4</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Robert O'Shea, Bipin Rajendran</dc:creator> |
|
|
</item> |
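A minimal sketch of the feedback-free recipe described above, under several simplifying assumptions (synthetic two-class data, tanh activations, a fixed ridge penalty, arbitrary layer widths): each layer's target potentials come from a random nonlinear projection of its inputs and the labels, weights are fit by closed-form ridge regression with no signal from downstream layers, and the readout is fit the same way. This illustrates the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: a linearly separable two-class problem standing in for a real dataset
n, d, classes = 200, 10, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(classes)[y]                            # one-hot labels

def ridge(A, B, lam=1e-2):
    # closed-form least squares: argmin_W ||A W - B||^2 + lam ||W||^2
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ B)

widths, H = [16, 16], X
for w in widths:
    # random nonlinear projection of (inputs, labels) defines the layer's target potentials
    R = rng.normal(size=(H.shape[1] + classes, w)) / np.sqrt(H.shape[1] + classes)
    target = np.tanh(np.concatenate([H, Y], axis=1) @ R)
    W = ridge(H, target)                          # fit weights without any retrograde signal
    H = np.tanh(H @ W)                            # single forward pass to the next layer

W_out = ridge(H, Y)                               # closed-form readout
print("training accuracy:", ((H @ W_out).argmax(axis=1) == y).mean())
```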
|
|
<item> |
|
|
<title>Clustered Network Connectedness: A New Measurement Framework with Application to Global Equity Markets</title> |
|
|
<link>https://arxiv.org/abs/2502.15458</link> |
|
|
<description>arXiv:2502.15458v2 Announce Type: replace-cross |
|
|
Abstract: Network connections, both across and within markets, are central in countless economic contexts. In recent decades, a large literature has developed and applied flexible methods for measuring network connectedness and its evolution, based on variance decompositions from vector autoregressions (VARs), as in Diebold and Yilmaz (2014). Those VARs are, however, typically identified using full orthogonalization (Sims, 1980), or no orthogonalization (Koop, Pesaran and Potter, 1996; Pesaran and Shin, 1998), which, although useful, are special and extreme cases of a more general framework that we develop in this paper. In particular, we allow network nodes to be connected in "clusters", such as asset classes, industries, regions, etc., where shocks are orthogonal across clusters (Sims-style orthogonalized identification) but correlated within clusters (Koop-Pesaran-Potter-Shin-style generalized identification), so that the ordering of network nodes is relevant across clusters but irrelevant within clusters. After developing the clustered connectedness framework, we apply it in a detailed empirical exploration of sixteen country equity markets spanning three global regions.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2502.15458v2</guid> |
|
|
<category>econ.EM</category> |
|
|
<category>q-fin.ST</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Bastien Buchwalter, Francis X. Diebold, Kamil Yilmaz</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Benchmarking field-level cosmological inference from galaxy redshift surveys</title> |
|
|
<link>https://arxiv.org/abs/2504.20130</link> |
|
|
<description>arXiv:2504.20130v2 Announce Type: replace-cross |
|
|
Abstract: Field-level inference has emerged as a promising framework to fully harness the cosmological information encoded in next-generation galaxy surveys. It involves performing Bayesian inference to jointly estimate the cosmological parameters and the initial conditions of the cosmic field, directly from the observed galaxy density field. Yet, the scalability and efficiency of sampling algorithms for field-level inference of large-scale surveys remain unclear. To address this, we introduce a standardized benchmark using a fast and differentiable simulator for the galaxy density field based on $\texttt{JaxPM}$. We evaluate a range of sampling methods, including standard Hamiltonian Monte Carlo (HMC), the No-U-Turn Sampler (NUTS) without and within a Gibbs scheme, and both adjusted and unadjusted microcanonical samplers (MAMS and MCLMC). These methods are compared based on their efficiency, in particular the number of model evaluations required per effective posterior sample. Our findings emphasize the importance of carefully preconditioning latent variables and demonstrate the significant advantage of (unadjusted) MCLMC for scaling to $\geq 10^6$-dimensional problems. We find that MCLMC outperforms adjusted samplers by over an order of magnitude, with only mild scaling in the dimension of our inference problem. This benchmark, along with the associated publicly available code, is intended to serve as a basis for the evaluation of future field-level sampling strategies. The code is openly available at https://github.com/hsimonfroy/benchmark-field-level</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2504.20130v2</guid> |
|
|
<category>astro-ph.CO</category> |
|
|
<category>astro-ph.IM</category> |
|
|
<category>stat.CO</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<arxiv:DOI>10.1088/1475-7516/2025/12/039</arxiv:DOI> |
|
|
<arxiv:journal_reference>JCAP12(2025)039</arxiv:journal_reference> |
|
|
<dc:creator>Hugo Simon, Fran\c{c}ois Lanusse, Arnaud de Mattia</dc:creator> |
|
|
</item> |
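The headline comparison metric above is the number of model evaluations per effective posterior sample. The sketch below computes that metric for a toy AR(1) chain using a crude autocorrelation-based ESS estimate; the chain and the per-step evaluation cost are placeholders, not outputs of the paper's samplers.

```python
import numpy as np

def effective_sample_size(chain):
    """Crude univariate ESS: truncate the autocorrelation sum at the first non-positive lag."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    rho_sum = 0.0
    for k in range(1, n):
        if acf[k] <= 0:
            break
        rho_sum += acf[k]
    return n / (1 + 2 * rho_sum)

rng = np.random.default_rng(0)
phi, n_steps, evals_per_step = 0.95, 10_000, 2     # evals_per_step is a hypothetical cost
chain = np.zeros(n_steps)
for t in range(1, n_steps):                        # AR(1) chain standing in for one latent coordinate
    chain[t] = phi * chain[t - 1] + rng.normal()

ess = effective_sample_size(chain)
print(f"ESS ~ {ess:.0f}; model evaluations per effective sample ~ {n_steps * evals_per_step / ess:.1f}")
```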
|
|
<item> |
|
|
<title>Conflicting Biases at the Edge of Stability: Norm versus Sharpness Regularization</title> |
|
|
<link>https://arxiv.org/abs/2505.21423</link> |
|
|
<description>arXiv:2505.21423v2 Announce Type: replace-cross |
|
|
Abstract: A widely believed explanation for the remarkable generalization capacities of overparameterized neural networks is that the optimization algorithms used for training induce an implicit bias towards benign solutions. To grasp this theoretically, recent works examine gradient descent and its variants in simplified training settings, often assuming vanishing learning rates. These studies reveal various forms of implicit regularization, such as $\ell_1$-norm minimizing parameters in regression and max-margin solutions in classification. Concurrently, empirical findings show that moderate to large learning rates exceeding standard stability thresholds lead to faster, albeit oscillatory, convergence in the so-called Edge-of-Stability regime, and induce an implicit bias towards minima of low sharpness (norm of training loss Hessian). In this work, we argue that a comprehensive understanding of the generalization performance of gradient descent requires analyzing the interaction between these various forms of implicit regularization. We empirically demonstrate that the learning rate balances between low parameter norm and low sharpness of the trained model. We furthermore prove for diagonal linear networks trained on a simple regression task that neither implicit bias alone minimizes the generalization error. These findings demonstrate that focusing on a single implicit bias is insufficient to explain good generalization, and they motivate a broader view of implicit regularization that captures the dynamic trade-off between norm and sharpness induced by non-negligible learning rates.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2505.21423v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Maria Matveev, Vit Fojtik, Hung-Hsu Chou, Gitta Kutyniok, Johannes Maly</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Do Neural Networks Need Gradient Descent to Generalize? A Theoretical Study</title> |
|
|
<link>https://arxiv.org/abs/2506.03931</link> |
|
|
<description>arXiv:2506.03931v2 Announce Type: replace-cross |
|
|
Abstract: Conventional wisdom attributes the mysterious generalization abilities of overparameterized neural networks to gradient descent (and its variants). The recent volume hypothesis challenges this view: it posits that these generalization abilities persist even when gradient descent is replaced by Guess & Check (G&C), i.e., by drawing weight settings until one that fits the training data is found. The validity of the volume hypothesis for wide and deep neural networks remains an open question. In this paper, we theoretically investigate this question for matrix factorization (with linear and non-linear activation)--a common testbed in neural network theory. We first prove that generalization under G&C deteriorates with increasing width, establishing what is, to our knowledge, the first case where G&C is provably inferior to gradient descent. Conversely, we prove that generalization under G&C improves with increasing depth, revealing a stark contrast between wide and deep networks, which we further validate empirically. These findings suggest that even in simple settings, there may not be a simple answer to the question of whether neural networks need gradient descent to generalize well.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2506.03931v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Yotam Alexander, Yonatan Slutzky, Yuval Ran-Milo, Nadav Cohen</dc:creator> |
|
|
</item> |
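The Guess & Check baseline discussed above is easy to demonstrate at toy scale: draw random factor pairs for a 3x3 rank-1 matrix until one fits a handful of observed entries, then report the error on the held-out entries. The sizes, tolerance, and Gaussian guessing distribution below are illustrative choices, not the paper's theoretical setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width = 3, 3                                    # tiny toy sizes
u, v = rng.normal(size=(d, 1)), rng.normal(size=(1, d))
M = u @ v                                          # rank-1 ground-truth matrix
mask = np.array([[1, 1, 0],
                 [0, 1, 1],
                 [1, 0, 0]], dtype=bool)           # entries revealed for "training"

batch, tol, found = 50_000, 0.2, False
for attempt in range(40):
    # "guess": draw a batch of random factorizations
    A = rng.normal(size=(batch, d, width))
    B = rng.normal(size=(batch, width, d))
    W = A @ B
    # "check": keep the first guess that fits the observed entries to within tol
    fit = ((W - M) ** 2 * mask).sum(axis=(1, 2)) / mask.sum()
    hits = np.flatnonzero(fit < tol)
    if hits.size:
        W0 = W[hits[0]]
        test_err = ((W0 - M) ** 2 * ~mask).sum() / (~mask).sum()
        print(f"fitting guess after ~{attempt * batch + int(hits[0])} draws, held-out error {test_err:.3f}")
        found = True
        break
if not found:
    print("no fitting guess found within the budget")
```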
|
|
<item> |
|
|
<title>InsurTech innovation using natural language processing</title> |
|
|
<link>https://arxiv.org/abs/2507.21112</link> |
|
|
<description>arXiv:2507.21112v3 Announce Type: replace-cross |
|
|
Abstract: With the rapid rise of InsurTech, traditional insurance companies are increasingly exploring alternative data sources and advanced technologies to sustain their competitive edge. This paper provides both a conceptual overview and practical case studies of natural language processing (NLP) and its emerging applications within insurance operations, focusing on transforming raw, unstructured text into structured data suitable for actuarial analysis and decision-making. Leveraging real-world alternative data provided by an InsurTech industry partner that enriches traditional insurance data sources, we apply various NLP techniques to demonstrate feature de-biasing, feature compression, and industry classification in the commercial insurance context. These enriched, text-derived insights not only add to and refine traditional rating factors for commercial insurance pricing but also offer novel perspectives for assessing underlying risk by introducing novel industry classification techniques. Through these demonstrations, we show that NLP is not merely a supplementary tool but a foundational element of modern, data-driven insurance analytics.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2507.21112v3</guid> |
|
|
<category>cs.CL</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Panyi Dong, Zhiyu Quan</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>AuON: A Linear-time Alternative to Orthogonal Momentum Updates</title> |
|
|
<link>https://arxiv.org/abs/2509.24320</link> |
|
|
<description>arXiv:2509.24320v4 Announce Type: replace-cross |
|
|
Abstract: Orthogonal momentum gradient updates have emerged to overcome the limitations of vector-based optimizers such as Adam, which suffers from high memory costs and ill-conditioned momentum updates. However, traditional orthogonalization approaches based on SVD or QR decomposition incur high computational and memory costs and underperform well-tuned SGD with momentum. Recent advances such as Muon improve efficiency by applying momentum before orthogonalization and approximating orthogonal matrices via Newton-Schulz iterations, which yields better GPU utilization and high sustained TFLOPS and reduces memory usage by up to 3x. Nevertheless, vanilla Muon suffers from exploding attention logits and has cubic computational complexity. In this paper, we take a deep dive into orthogonal momentum gradient updates to identify the main properties behind Muon's remarkable performance. We propose AuON (Alternative Unit-norm momentum updates by Normalized nonlinear scaling), a linear-time optimizer that achieves strong performance without approximate orthogonal matrices, while preserving structural alignment and reconditioning ill-posed updates. AuON has an automatic "emergency brake" to handle exploding attention logits. We further introduce a hybrid variant, Hybrid-AuON, which combines the linear transformations with Newton-Schulz iterations and outperforms Muon on language modeling tasks. Code is available at: https://github.com/ryyzn9/AuON</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2509.24320v4</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-sa/4.0/</dc:rights> |
|
|
<dc:creator>Dipan Maity</dc:creator> |
|
|
</item> |
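The abstract above refers to Muon's use of Newton-Schulz iterations to approximate the orthogonal factor of a momentum matrix. The sketch below shows the classical cubic Newton-Schulz iteration for the orthogonal polar factor (Muon and Hybrid-AuON use their own tuned variants); the matrix size and step count are arbitrary.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=30):
    """Approximate the orthogonal polar factor U V^T of G = U S V^T without an SVD.

    Cubic Newton-Schulz: X <- 1.5 X - 0.5 X X^T X. Normalizing by the Frobenius norm
    keeps every singular value in (0, 1], where the iteration converges to 1."""
    X = G / np.linalg.norm(G)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 8))                        # stands in for a momentum matrix
O = newton_schulz_orthogonalize(G)

U, _, Vt = np.linalg.svd(G)
print("distance to exact polar factor:", np.linalg.norm(O - U @ Vt))
print("orthogonality error:", np.linalg.norm(O.T @ O - np.eye(8)))
```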
|
|
<item> |
|
|
<title>Bayesian model selection and misspecification testing in imaging inverse problems only from noisy and partial measurements</title> |
|
|
<link>https://arxiv.org/abs/2510.27663</link> |
|
|
<description>arXiv:2510.27663v2 Announce Type: replace-cross |
|
|
Abstract: Modern imaging techniques heavily rely on Bayesian statistical models to address difficult image reconstruction and restoration tasks. This paper addresses the objective evaluation of such models in settings where ground truth is unavailable, with a focus on model selection and misspecification diagnosis. Existing unsupervised model evaluation methods are often unsuitable for computational imaging due to their high computational cost and incompatibility with modern image priors defined implicitly via machine learning models. We herein propose a general methodology for unsupervised model selection and misspecification detection in Bayesian imaging sciences, based on a novel combination of Bayesian cross-validation and data fission, a randomized measurement splitting technique. The approach is compatible with any Bayesian imaging sampler, including diffusion and plug-and-play samplers. We demonstrate the methodology through experiments involving various scoring rules and types of model misspecification, where we achieve excellent selection and detection accuracy with a low computational cost.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2510.27663v2</guid> |
|
|
<category>eess.IV</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Tom Sprunck, Marcelo Pereyra, Tobias Liaudat</dc:creator> |
|
|
</item> |
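The data-fission step mentioned above is simplest in the Gaussian-noise case: adding and subtracting an independent scaled noise draw turns one noisy measurement with known noise level into two independent measurements of the same signal, one for fitting and one for scoring. The sketch below checks this numerically; the signal, noise level, and tau are arbitrary, and the paper combines the split with Bayesian cross-validation over imaging samplers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, tau = 100_000, 1.0, 0.7           # tau controls how the information is split
mu = rng.normal(size=n)                     # underlying "image" (just a vector here)
y = mu + sigma * rng.normal(size=n)         # a single noisy measurement

# Gaussian data fission: one measurement becomes two independent ones
z = sigma * rng.normal(size=n)
y_fit  = y + tau * z                        # use to fit / run the posterior sampler
y_eval = y - z / tau                        # hold out for model scoring

resid_fit, resid_eval = y_fit - mu, y_eval - mu
print("noise correlation between splits:", np.corrcoef(resid_fit, resid_eval)[0, 1])  # ~0
print("split variances:", resid_fit.var(), "vs", (1 + tau**2) * sigma**2,
      "and", resid_eval.var(), "vs", (1 + 1 / tau**2) * sigma**2)
```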
|
|
<item> |
|
|
<title>Forgetting is Everywhere</title> |
|
|
<link>https://arxiv.org/abs/2511.04666</link> |
|
|
<description>arXiv:2511.04666v2 Announce Type: replace-cross |
|
|
Abstract: A fundamental challenge in developing general learning algorithms is their tendency to forget past knowledge when adapting to new data. Addressing this problem requires a principled understanding of forgetting; yet, despite decades of study, no unified definition has emerged that provides insights into the underlying dynamics of learning. We propose an algorithm- and task-agnostic theory that characterises forgetting as a lack of self-consistency in a learner's predictive distribution over future experiences, manifesting as a loss of predictive information. Our theory naturally yields a general measure of an algorithm's propensity to forget and shows that Bayesian learners are capable of adapting without forgetting. To validate the theory, we design a comprehensive set of experiments that span classification, regression, generative modelling, and reinforcement learning. We empirically demonstrate how forgetting is present across all deep learning settings and plays a significant role in determining learning efficiency. Together, these results establish a principled understanding of forgetting and lay the foundation for analysing and improving the information retention capabilities of general learning algorithms.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2511.04666v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Ben Sanati, Thomas L. Lee, Trevor McInroe, Aidan Scannell, Nikolay Malkin, David Abel, Amos Storkey</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Identification-aware Markov chain Monte Carlo</title> |
|
|
<link>https://arxiv.org/abs/2511.12847</link> |
|
|
<description>arXiv:2511.12847v2 Announce Type: replace-cross |
|
|
Abstract: Leaving posterior sensitivity concerns aside, non-identifiability of the parameters does not pose a difficulty for Bayesian inference as long as the posterior is proper, but the multi-modality or flat regions of the posterior induced by the lack of identification do pose a challenge for modern Bayesian computation. Sampling methods often struggle with slow or non-convergence when dealing with multiple modes or flat regions of the target distributions. This paper develops a novel Markov chain Monte Carlo (MCMC) approach for non-identified models, leveraging the knowledge of observationally equivalent sets of parameters, and highlights an important role that identification plays in modern Bayesian analysis. We show that our identification-aware proposal eliminates mode entrapment, achieving a convergence rate uniformly bounded away from zero, in sharp contrast to the exponentially decaying rates characterizing standard Random Walk Metropolis and Hamiltonian Monte Carlo. Simulation studies show its superior performance compared to other popular computational methods, including Hamiltonian Monte Carlo and sequential Monte Carlo. We also demonstrate that our method uncovers non-trivial modes in the target distribution in a structural vector moving-average (SVMA) application.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2511.12847v2</guid> |
|
|
<category>econ.EM</category> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Toru Kitagawa, Yizhou Kuang</dc:creator> |
|
|
</item> |
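A toy version of the idea above, not the paper's proposal for SVMA models: in the model below only the product a*b is identified, so the posterior has two well-separated branches related by the sign flip (a, b) -> (-a, -b), which is a member of the observationally equivalent set. Mixing a random-walk move with this deterministic flip (an involution with unit Jacobian, accepted with the usual Metropolis ratio) lets the chain switch modes freely, whereas the random walk alone stays trapped. All model and tuning constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
ybar, n_obs = 2.0, 50                 # data enter the likelihood only through a*b

def log_post(a, b):
    # y_i ~ N(a*b, 1) with independent N(0, 1) priors: a and b are not separately identified
    return -0.5 * n_obs * (a * b - ybar) ** 2 - 0.5 * (a ** 2 + b ** 2)

def sample(n_iter=20_000, p_flip=0.5, step=0.15):
    a, b = 1.0, 2.0
    lp = log_post(a, b)
    chain = np.empty((n_iter, 2))
    for t in range(n_iter):
        if rng.random() < p_flip:
            a_new, b_new = -a, -b     # jump to an observationally equivalent point
        else:
            a_new, b_new = a + step * rng.normal(), b + step * rng.normal()
        lp_new = log_post(a_new, b_new)
        if np.log(rng.random()) < lp_new - lp:
            a, b, lp = a_new, b_new, lp_new
        chain[t] = a, b
    return chain

for p in (0.0, 0.5):                  # p=0.0 is plain random-walk Metropolis
    draws = sample(p_flip=p)
    print(f"p_flip={p}: fraction of draws on the negative branch =", (draws[:, 0] < 0).mean())
```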
|
|
<item> |
|
|
<title>OceanForecastBench: A Benchmark Dataset for Data-Driven Global Ocean Forecasting</title> |
|
|
<link>https://arxiv.org/abs/2511.18732</link> |
|
|
<description>arXiv:2511.18732v2 Announce Type: replace-cross |
|
|
Abstract: Global ocean forecasting aims to predict key ocean variables such as temperature, salinity, and currents, which is essential for understanding and describing oceanic phenomena. In recent years, data-driven deep learning-based ocean forecast models, such as XiHe, WenHai, LangYa and AI-GOMS, have demonstrated significant potential in capturing complex ocean dynamics and improving forecasting efficiency. Despite these advancements, the absence of open-source, standardized benchmarks has led to inconsistent data usage and evaluation methods. This gap hinders efficient model development, impedes fair performance comparison, and constrains interdisciplinary collaboration. To address this challenge, we propose OceanForecastBench, a benchmark offering three core contributions: (1) A high-quality global ocean reanalysis dataset covering 28 years for model training, including 4 ocean variables across 23 depth levels and 4 sea surface variables. (2) High-reliability satellite and in-situ observations for model evaluation, covering approximately 100 million locations in the global ocean. (3) An evaluation pipeline and a comprehensive benchmark with 6 typical baseline models, leveraging the observations to evaluate model performance from multiple perspectives. OceanForecastBench represents the most comprehensive benchmarking framework currently available for data-driven ocean forecasting, offering an open-source platform for model development, evaluation, and comparison. The dataset and code are publicly available at: https://github.com/Ocean-Intelligent-Forecasting/OceanForecastBench.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2511.18732v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Haoming Jia, Yi Han, Xiang Wang, Huizan Wang, Wei Wu, Jianming Zheng, Peikun Xiao</dc:creator> |
|
|
</item> |
|
|
</channel> |
|
|
</rss> |
|
|
|