Archive for OWABI
OWABI⁷, 29 January 2026: Sequential Neural Score Estimation (11am UK time)
Posted in Books, Statistics, University life with tags ABC, approximate Bayesian inference, diffusion model, generalised Bayesian inference, generative model, OWABI, score function, sequential Monte Carlo, simulation-based inference, University of Warwick, webinar on January 21, 2026 by xi'an
Approximate Bayesian Computation with Statistical Distances for Model Selection [OWABI, 27 Nov]
Posted in Books, Statistics, University life with tags ABC model selection, Approximate Bayesian computation, approximate Bayesian inference, Bayesian inference, curse of dimensionality, information loss, intractable likelihood, One World Approximate Bayesian Inference Seminar, OWABI, simulation, simulation-based inference, summary statistics, toad, University of Warwick, webinar on November 17, 2025 by xi'an
The next OWABI seminar will be delivered by Clara Grazian (University of Sydney), who will talk about “Approximate Bayesian Computation with Statistical Distances for Model Selection” on Thursday 27 November at 11am UK time:
Abstract: Model selection is a key task in statistics, playing a critical role across various scientific disciplines. While no model can fully capture the complexities of a real-world data-generating process, identifying the model that best approximates it can provide valuable insights. Bayesian statistics offers a flexible framework for model selection by updating prior beliefs as new data become available, allowing for ongoing refinement of candidate models. This is typically achieved by calculating posterior probabilities, which quantify the support for each model given the observed data. However, in cases where likelihood functions are intractable, exact computation of these posterior probabilities becomes infeasible. Approximate Bayesian computation (ABC) has emerged as a likelihood-free method; it is traditionally used with summary statistics to reduce data dimensionality, but this often results in information loss that is difficult to quantify, particularly in model selection contexts. Recent advancements propose the use of full-data approaches based on statistical distances, offering a promising alternative that bypasses the need for handcrafted summary statistics and can yield posterior approximations that more closely reflect the true posterior under suitable conditions. Despite these developments, full-data ABC approaches have not yet been widely applied to model selection problems. This paper seeks to address this gap by investigating the performance of ABC with statistical distances in model selection. Through simulation studies and an application to toad movement models, this work explores whether full-data approaches can overcome the limitations of summary statistic-based ABC for model choice.
Keywords: model choice, distance metrics, full data approaches
Reference: C. Grazian, Approximate Bayesian Computation with Statistical Distances for Model Selection, preprint arXiv:2410.21603, 2025
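For concreteness, here is a minimal toy sketch of the full-data strategy described in the abstract: rejection ABC for model choice where observed and simulated samples are compared through a statistical distance (the one-dimensional Wasserstein distance here) rather than through summary statistics. The two candidate models, priors, tolerance and sample sizes are illustrative assumptions, not the setting of the paper.

```python
# Toy sketch (not the paper's implementation) of rejection ABC for model choice
# using a full-data statistical distance instead of summary statistics.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Two illustrative candidate models for positive data
def simulate_m1(theta, n):             # model 1: log-normal with log-scale theta
    return rng.lognormal(mean=0.0, sigma=theta, size=n)

def simulate_m2(theta, n):             # model 2: exponential with rate theta
    return rng.exponential(scale=1.0 / theta, size=n)

models = {1: simulate_m1, 2: simulate_m2}
prior = lambda: rng.uniform(0.1, 2.0)  # same illustrative prior under both models

y_obs = rng.lognormal(mean=0.0, sigma=0.8, size=200)   # pretend observations

N, eps = 20_000, 0.15
accepted = {1: 0, 2: 0}
for _ in range(N):
    m = int(rng.integers(1, 3))        # uniform prior over the two models
    theta = prior()
    y_sim = models[m](theta, y_obs.size)
    # full-data comparison: a distance between empirical distributions
    if wasserstein_distance(y_obs, y_sim) < eps:
        accepted[m] += 1

total = sum(accepted.values())
for m in accepted:
    print(f"ABC posterior probability of model {m}: {accepted[m] / max(total, 1):.3f}")
```

Swapping the Wasserstein distance for an energy or MMD distance only changes one line, which is part of the appeal of full-data approaches over handcrafted summaries.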
OWABI Season VII
Posted in Statistics with tags ABC, approximate Bayesian inference, Bayesian inference, Bayesian neural networks, multi-level Monte Carlo, multifidelity, multilevel Monte Carlo, neural SBI, OWABI, simulation-based inference, UCL, University College London, University of Warwick, webinar on October 17, 2025 by xi'an
A new season of the One World Approximate Bayesian Inference (OWABI) Seminar is about to start! The first OWABI talk of the season will be given by François-Xavier Briol (University College London), who will talk about “Multilevel neural simulation-based inference” on Thursday 30 October at 11am UK time.
Abstract: Neural simulation-based inference (SBI) is a popular set of methods for Bayesian inference when models are only available in the form of a simulator. These methods are widely used in the sciences and engineering, where writing down a likelihood can be significantly more challenging than constructing a simulator. However, the performance of neural SBI can suffer when simulators are computationally expensive, thereby limiting the number of simulations that can be performed. In this paper, we propose a novel approach to neural SBI which leverages multilevel Monte Carlo techniques for settings where several simulators of varying cost and fidelity are available. We demonstrate through both theoretical analysis and extensive experiments that our method can significantly enhance the accuracy of SBI methods given a fixed computational budget.
Keywords: multifidelity, neural SBI, multilevel Monte Carlo
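As a rough illustration of the multilevel Monte Carlo principle the abstract leverages (not of the authors' neural SBI method itself), the following toy sketch spends most of a fixed budget on a cheap low-fidelity simulator and corrects it with a small number of coupled high-fidelity runs; the simulators, coupling and budgets are made-up assumptions.

```python
# Toy two-level Monte Carlo estimate of E[f_high]: many cheap low-fidelity
# simulations plus a small correction term from coupled high-fidelity runs.
import numpy as np

rng = np.random.default_rng(1)

def simulate_high(theta, u):
    # "expensive" simulator: fine Euler discretisation of a toy SDE statistic
    x, dt = 0.0, 1.0 / len(u)
    for z in u:
        x += -theta * x * dt + np.sqrt(dt) * z
    return x**2

def simulate_low(theta, u):
    # "cheap" simulator: coarse discretisation re-using aggregated noise,
    # so that the two levels are coupled through the same random input u
    coarse = u.reshape(-1, 10).sum(axis=1) / np.sqrt(10)
    x, dt = 0.0, 10.0 / len(u)
    for z in coarse:
        x += -theta * x * dt + np.sqrt(dt) * z
    return x**2

theta = 0.5
n_low, n_high = 10_000, 200          # most of the budget goes to the cheap level

# Level 0: plain Monte Carlo under the low-fidelity simulator
level0 = np.mean([simulate_low(theta, rng.standard_normal(1000)) for _ in range(n_low)])

# Level 1: correction E[f_high - f_low], estimated with shared noise
diffs = []
for _ in range(n_high):
    u = rng.standard_normal(1000)
    diffs.append(simulate_high(theta, u) - simulate_low(theta, u))
level1 = np.mean(diffs)

print("two-level estimate of E[f_high]:", level0 + level1)
```

The coupling keeps the variance of the level-1 differences small, which is what makes the few expensive runs affordable within the fixed budget.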
BayesComp 2025.3
Posted in pictures, Running, Statistics, Travel, University life with tags abalone, ABC, ABC model selection, BayesComp 2025, Bayesian GANs, Bayesian lasso, Bayesian neural networks, Bayesian predictive, Bayesian robustness, Bayesian semi-parametrics, changepoint detection, equator, Gibbs posterior, Henri Poincaré, horseshoe prior, humidity, Kingman's coalescent, Langevin MCMC algorithm, local regression, National University Singapore, NUS, Ocean, OWABI, particle filters, PDMP, plenary speaker, power posterior, random kernel MCMC, RATP, RER, safe Bayes, shrinkage, shrinkage estimation, Singapore, SNCF, splines, Stein divergence, stochastic gradient MCMC, Swendsen-Wang algorithm, Sylvia Frühwirth-Schnatter, Szechuan cuisine, treadmill, University of Warwick, Wasserstein distance, William Strawderman, WU Wirtschaftsuniversität Wien, zigzag algorithm on June 20, 2025 by xi'an
The second day of the conference started with cooler and less humid weather (although this did not last!), although my brain felt a wee bit foggy from a lack of sleep (and I almost crashed while running on the hotel treadmill, at 14.5km/h!), and the plenary talk of my friend of many years Sylvia Frühwirth-Schnatter on horseshoe priors and time-varying time series (à la West). With a nice closed-form representation involving hypergeometric functions of the second kind (my favourite!), with the addition of a triple-Gamma prior. Sylvia stressed the enormous impact of the prior choice on change-point detection, which was already the point in the original horseshoe paper (as opposed to George’s Lasso prior). Without incorporating any specific modelling of potential change-points, fair enough given that the parameter is moving with time, unhindered. Her MCMC choices involved discrete parameters with Negative Binomial and Poisson parameters, allowing for partially integrated or collapsed solutions. Possibly further improved by Swendsen-Wang steps.

I then attended the (advanced) Langevin session after agonising over my choice among a wealth of options! Sam Power presented a talk linking simulation with optimisation targets, over measure spaces. With Wasserstein gradient flow algorithms that resemble Langevin algorithms once discretised by a particle system. (A natural resolution producing a somewhat unnatural form of measure estimator since made of Dirac masses, from which very little can be learned.) Then [my Warwick colleague & coauthor] Andi Wang spoke on underdamped Langevin diffusions, when Poincaré’s inequality fails, but convergence (in total variation) still occurs. Followed by Peter Whalley on splitting methods (where random hypergeometric subsampling dominates Robbins-Monro) and stochastic gradient algorithms, in a connected (to the previous talks) way since involving underdamped aspects. (With a personal discovery of Polyak’s heavy ball method.)
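To make the connection explicit, here is a minimal sketch (under the illustrative assumption of a standard Gaussian target) of how the Wasserstein gradient flow of the KL divergence to the target, once discretised in time and approximated by a cloud of Dirac masses, boils down to running unadjusted Langevin steps on each particle.

```python
# Toy particle discretisation of the Wasserstein gradient flow of KL(. || pi):
# each particle follows an unadjusted Langevin (ULA) update.
import numpy as np

rng = np.random.default_rng(2)

grad_log_pi = lambda x: -x            # target pi = N(0,1), so grad log pi(x) = -x

n_particles, n_steps, h = 500, 2_000, 1e-2
x = rng.normal(size=n_particles)      # initial particle cloud (empirical measure)

for _ in range(n_steps):
    # one ULA step per particle = one time-discretised step of the flow
    x = x + h * grad_log_pi(x) + np.sqrt(2 * h) * rng.normal(size=n_particles)

# the empirical measure of the particles approximates pi (mean ~ 0, variance ~ 1)
print("particle mean and variance:", x.mean(), x.var())
```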

The afternoon session saw me facing a terrible dilemma with three close friends talking at the same time! Eventually opting for PDMPs, over simulation-based inference and recalibration for approximate Bayesian methods. Kengo Kamatani gave a general introduction to PDMPs, before explaining the automated implementation he considered with Charly Andral (during Charly’s visit to ISM, Tokyo, two summers ago). Towards accelerating the generation of the jump time. Then Luke Hardcastle applied PDMPs for survival prediction, using spike & slab priors and sticky PDMPs. And Jere Koskela (formerly Warwick) extended zig-zag sampling to discrete settings (incl. Kingman’s coalescent.)
The (rather long) day was not over yet since we had planned an extra on-site OWABI seminar & webinar with two participants in the conference, Filippo Pagani (Warwick and OCEAN postdoc) using fusion for federated learning, with a trapezoidal approximation, and Maurizio Filippone on GANs as hidden perfect ABC model selection, a GAN providing an automatic density estimator… With astounding Gemini-generated cartoons! Videos will soon be available. A big congrats to the speakers who managed to convey their ideas and results despite the late hour! (On the extra-academic side, I was invited last night to a genuine Szechuan dinner in Chinatown, with a large array of spicy dishes (if not that spicy!), and a rare opportunity to taste abalone. And bullfrogs. Quite a treat! And a good reason to skip dinner altogether!)

exceptional OWABI web/sem’inar [19 June, BayesComp²⁵]
Posted in pictures, Statistics, Travel, Uncategorized, University life with tags ABC model selection, Approximate Bayesian computation, approximate Bayesian inference, Bayesian GANs, Bayesian inference, distributed computing, fusion, intractable likelihood, One World Approximate Bayesian Inference Seminar, OWABI, self-normalised importance sampling, simulation, simulation-based inference, University of Warwick, webinar on June 10, 2025 by xi'an
Exceptionally, the next One World Approximate Bayesian Inference (OWABI) Seminar will be hybrid as it is scheduled to take place during BayesComp 2025 in Singapore, on Thursday 19 June at 8pm Singapore time (1pm in Tórshavn), with two talks. The first, by Filippo Pagani, is on Bayesian fusion:
Bayesian Fusion is a powerful approach that enables distributed inference while maintaining exactness. However, the approach is computationally expensive. In this work, we propose a novel method that incorporates numerical approximations to alleviate the most computationally expensive steps, thereby achieving substantial reductions in runtime. Our approach retains the flexibility to approximate the target posterior distribution to an arbitrary degree of accuracy, and is scalable with respect to both the size of the dataset and the number of computational cores. Our method offers a practical and efficient alternative for large-scale Bayesian inference in distributed environments.
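As context for this abstract, here is a toy sketch of the distributed setting fusion addresses, where the full posterior factorises over data shards held on separate cores. The merge shown is a crude consensus-style Gaussian-moment combination, emphatically not the exact fusion algorithm of the talk, and the model, prior and shard sizes are illustrative assumptions.

```python
# Toy distributed-inference setting: sub-posteriors on data shards, combined by
# a crude precision-weighted (consensus-style) merge. Exact fusion methods aim
# to avoid the approximation error this merge introduces.
import numpy as np

rng = np.random.default_rng(3)

# split a toy Gaussian dataset (unknown mean, known sigma = 1) across 4 "cores"
y = rng.normal(loc=1.5, scale=1.0, size=4_000)
shards = np.split(y, 4)

def subposterior_samples(shard, n=5_000, sigma=1.0):
    # vague N(0, 10^2) prior on the mean, known sigma: closed-form sub-posterior
    # (the prior is over-counted across shards, negligible here because it is vague)
    prior_var, like_var = 10.0**2, sigma**2 / len(shard)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
    post_mean = post_var * shard.mean() / like_var
    return rng.normal(post_mean, np.sqrt(post_var), size=n)

subs = [subposterior_samples(s) for s in shards]

# consensus-style merge: precision-weighted combination of sub-posterior draws
weights = [1.0 / s.var() for s in subs]
combined = sum(w * s for w, s in zip(weights, subs)) / sum(weights)
print("combined posterior mean ≈", combined.mean())
```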
The second talk, by Maurizio Filippone, bears on GANs viewed as probabilistic generative models:
Generative Adversarial Networks (GANs) are popular models achieving impressive performance in various generative modeling tasks. In this work, we aim at explaining the undeniable success of GANs by interpreting them as probabilistic generative models. In this view, GANs transform a distribution over latent variables Z into a distribution over inputs X through a function parameterized by a neural network, which is usually referred to as the generator. This probabilistic interpretation enables us to cast the GAN adversarial-style optimization as a proxy for marginal likelihood optimization. More specifically, it is possible to show that marginal likelihood maximization with respect to model parameters is equivalent to the minimization of the Kullback-Leibler (KL) divergence between the true data generating distribution and the one modeled by the GAN. By replacing the KL divergence with other divergences and integral probability metrics we obtain popular variants of GANs such as f-GANs, Wasserstein-GANs, and Maximum Mean Discrepancy (MMD)-GANs. This connection has profound implications because of the desirable properties associated with marginal likelihood optimization, such as (i) lack of overfitting, which explains the success of GANs, and (ii) allowing for model selection, which opens the possibility of obtaining parsimonious generators through architecture search.
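The equivalence invoked in this abstract between marginal likelihood maximisation and KL minimisation follows from a standard limiting argument (with the notational assumption that p* denotes the data-generating distribution and p_θ the distribution over X induced by the generator):

```latex
% Standard derivation of the stated equivalence (p^* = data-generating
% distribution, p_\theta = distribution induced by the generator).
\begin{align*}
\frac{1}{n}\sum_{i=1}^{n}\log p_\theta(x_i)
  \;\xrightarrow[n\to\infty]{\text{a.s.}}\;
  \mathbb{E}_{p^*}\!\left[\log p_\theta(X)\right]
  \;=\; -\,\mathrm{KL}\!\left(p^* \,\|\, p_\theta\right) \;-\; \mathrm{H}(p^*),
\end{align*}
% so maximising the (expected) log marginal likelihood in \theta is the same as
% minimising KL(p^* || p_\theta), since the entropy H(p^*) is free of \theta.
```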
These talks will be delivered on-site and on-line, via a Zoom videoconference.
