Archive for webinar

OWABI⁷, 29 January 2026: Sequential Neural Score Estimation (11am UK time)

Posted in Books, Statistics, University life on January 21, 2026 by xi'an

Speaker: Louis Sharrock (University College London)

Title: Sequential Neural Score Estimation: Likelihood-free inference with conditional score-based diffusion models
Abstract: We introduce Sequential Neural Posterior Score Estimation (SNPSE), a score-based method for Bayesian inference in simulator-based models. Our method, inspired by the remarkable success of score-based methods in generative modelling, leverages conditional score-based diffusion models to generate samples from the posterior distribution of interest. The model is trained using an objective function which directly estimates the score of the posterior. We embed the model into a sequential training procedure, which guides simulations using the current approximation of the posterior at the observation of interest, thereby reducing the simulation cost. We also introduce several alternative sequential approaches, and discuss their relative merits. We then validate our method, as well as its amortised, non-sequential, variant on several numerical examples, demonstrating comparable or superior performance to existing state-of-the-art methods such as Sequential Neural Posterior Estimation (SNPE).
Keywords: diffusion models, simulation based inference, sequential methods.
Reference: L. Sharrock, J. Simons, S. Liu, M. Beaumont, Sequential Neural Score Estimation: Likelihood-Free Inference with Conditional Score-Based Diffusion Models. PMLR, 235, 44565-44602, 2024.
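As a toy illustration of the core idea behind score-based posterior sampling (not the SNPSE algorithm itself, which trains a conditional score network sequentially): once the posterior score is available, sampling reduces to running Langevin dynamics. Here a conjugate Gaussian model, where the score is known in closed form, stands in for the learned network.

```python
import numpy as np

# Toy illustration (not the SNPSE algorithm itself): once the posterior
# score grad_theta log p(theta | x) is available (here in closed form for
# a conjugate Gaussian model; in SNPSE, via a learned conditional score
# network), posterior samples can be drawn with Langevin dynamics.

rng = np.random.default_rng(0)

# Model: theta ~ N(0, 1), x | theta ~ N(theta, 1); observed x = 1.2.
# Exact posterior: N(x/2, 1/2).
x_obs = 1.2
post_mean, post_var = x_obs / 2.0, 0.5

def posterior_score(theta):
    """Closed-form stand-in for a learned conditional score network."""
    return -(theta - post_mean) / post_var

# Unadjusted Langevin: theta <- theta + (h/2) * score + sqrt(h) * noise.
h, n_chains, n_steps = 0.01, 2000, 500
theta = rng.standard_normal(n_chains)          # initialise from the prior
for _ in range(n_steps):
    theta += 0.5 * h * posterior_score(theta) \
             + np.sqrt(h) * rng.standard_normal(n_chains)

print(theta.mean(), theta.var())   # close to 0.6 and 0.5
```

Replacing `posterior_score` with a network trained by conditional score matching, and the Langevin scheme with a reverse diffusion, gives the flavour of the score-based samplers discussed in the talk.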

Approximate Bayesian Computation with Statistical Distances for Model Selection [OWABI, 27 Nov]

Posted in Books, Statistics, University life on November 17, 2025 by xi'an

The next OWABI seminar is delivered by Clara Grazian (University of Sydney), who will talk about “Approximate Bayesian Computation with Statistical Distances for Model Selection” on Thursday 27 November at 11am UK time:

Abstract: Model selection is a key task in statistics, playing a critical role across various scientific disciplines. While no model can fully capture the complexities of a real-world data-generating process, identifying the model that best approximates it can provide valuable insights. Bayesian statistics offers a flexible framework for model selection by updating prior beliefs as new data becomes available, allowing for ongoing refinement of candidate models. This is typically achieved by calculating posterior probabilities, which quantify the support for each model given the observed data. However, in cases where likelihood functions are intractable, exact computation of these posterior probabilities becomes infeasible. Approximate Bayesian computation (ABC) has emerged as a likelihood-free alternative; it is traditionally used with summary statistics to reduce data dimensionality, but this often results in information loss that is difficult to quantify, particularly in model selection contexts. Recent advancements propose the use of full data approaches based on statistical distances, offering a promising alternative that bypasses the need for handcrafted summary statistics and can yield posterior approximations that more closely reflect the true posterior under suitable conditions. Despite these developments, full data ABC approaches have not yet been widely applied to model selection problems. This paper seeks to address this gap by investigating the performance of ABC with statistical distances in model selection. Through simulation studies and an application to toad movement models, this work explores whether full data approaches can overcome the limitations of summary statistic-based ABC for model choice.
Keywords: model choice, distance metrics, full data approaches
Reference: C. Grazian, Approximate Bayesian Computation with Statistical Distances for Model Selection, preprint at arXiv:2410.21603, 2025.
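A minimal sketch of what full-data ABC model choice looks like, in the spirit of the talk but not the paper's exact algorithm: two candidate models with matched mean and variance are compared through a 1-Wasserstein distance between the full observed and simulated samples, with no summary statistics.

```python
import numpy as np

# Toy sketch of ABC model choice with a full-data statistical distance
# (1-Wasserstein); illustrative only, not the paper's algorithm.

rng = np.random.default_rng(1)

def wasserstein1(a, b):
    """Exact W1 between two equal-size empirical distributions."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

n = 500
y_obs = rng.standard_normal(n)                 # data truly from model 0

def simulate(model):
    if model == 0:                             # N(0, 1)
        return rng.standard_normal(n)
    return rng.laplace(scale=1 / np.sqrt(2), size=n)  # matched variance

# Draw model indices under a uniform model prior, record distances.
n_sim = 2000
models = rng.integers(0, 2, size=2 * n_sim)
dists = np.array([wasserstein1(y_obs, simulate(m)) for m in models])

# Keep the closest 5% of simulations; the ABC model posterior is the
# frequency of each model among the accepted draws.
keep = dists <= np.quantile(dists, 0.05)
prob_model0 = (models[keep] == 0).mean()
print(prob_model0)   # well above 1/2: the true model is favoured
```

Since the two candidates share their first two moments, low-dimensional summaries such as mean and variance would be uninformative here, which is exactly the kind of situation where a full-data distance pays off.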

OWABI Season VII

Posted in Statistics on October 17, 2025 by xi'an

A new season of the One World Approximate Bayesian Inference (OWABI) Seminar is about to start!
The 1st OWABI talk of the Season will be given by François-Xavier Briol (University College London), who will talk about “Multilevel neural simulation-based inference” on Thursday 30th October at 11am UK time.
Abstract: Neural simulation-based inference (SBI) is a popular set of methods for Bayesian inference when models are only available in the form of a simulator. These methods are widely used in the sciences and engineering, where writing down a likelihood can be significantly more challenging than constructing a simulator. However, the performance of neural SBI can suffer when simulators are computationally expensive, thereby limiting the number of simulations that can be performed. In this paper, we propose a novel approach to neural SBI which leverages multilevel Monte Carlo techniques for settings where several simulators of varying cost and fidelity are available. We demonstrate through both theoretical analysis and extensive experiments that our method can significantly enhance the accuracy of SBI methods given a fixed computational budget.
Keywords: multifidelity, neural SBI, multilevel Monte Carlo
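The multilevel idea underlying the talk can be conveyed with a toy two-level Monte Carlo estimator (purely illustrative, not the talk's method): many cheap, biased coarse simulations, plus a few coupled fine-minus-coarse corrections where the same inputs feed both fidelities.

```python
import numpy as np

# Toy two-level Monte Carlo sketch of the idea behind multilevel SBI:
# many cheap coarse simulations plus a few coupled corrections.
# Purely illustrative; not the talk's method.

rng = np.random.default_rng(2)

def coarse(x):
    """Cheap, biased 'low-fidelity simulator': clipped quadratic."""
    return np.minimum(x**2, 4.0)

def fine(x):
    """Expensive 'high-fidelity simulator': exact quadratic."""
    return x**2

# Target: E[fine(X)] = 1 for X ~ N(0, 1).
# Level 0: large sample of the cheap simulator.
x0 = rng.standard_normal(100_000)
level0 = coarse(x0).mean()

# Level 1: small *coupled* sample of the correction fine - coarse.
# The same inputs feed both fidelities, so the correction has low
# variance and needs only a few expensive runs.
x1 = rng.standard_normal(2_000)
correction = (fine(x1) - coarse(x1)).mean()

estimate = level0 + correction      # telescoping sum, unbiased for E[fine(X)]
print(estimate)                     # close to 1.0
```

The telescoping structure is what lets a fixed computational budget be spread across fidelities: the coarse level soaks up most of the variance at low cost, while the expensive simulator is only called to correct the bias.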

a guest post from Julyan Arbel on ISBA on-line resources

Posted in Books, pictures, Statistics, University life on September 30, 2025 by xi'an
I would like to highlight two resources that, in my humble opinion as ISBA Social Media Manager, remain under-recognized yet immensely valuable:
ISBA Webinars on Bayesian Analysis Articles (2019–present).
This webpage gathers an exceptional collection of webinars discussing Bayesian Analysis articles since 2019. For anyone curious about the frontiers of Bayesian statistics, this series brings together cutting-edge research from world-class experts. The talks span topics such as model uncertainty and missing data, new perspectives on stick-breaking models, sparse Bayesian factor analysis, and advances in causal inference under model mis-specification. Other contributions cover nonparametric priors, spatio-temporal modeling of Arctic sea ice, Bayesian regression trees for causal inference, and much more.
The next BA webinar will focus on the paper “Model Uncertainty and Missing Data: An Objective Bayesian Perspective” by G. García-Donato, M. Eugenia Castellanos, S. Cabras, A. Quirós, and A. Forte. There will be four invited discussants: M. Clyde, M. Ferreira, A. Ly, and J. Rubio. It is scheduled for November 5, 2025 (4:00 PM UTC | 11:00 AM EST | 5:00 PM CET). Registration will be announced later on this webpage.
ISBA YouTube Channel.
All recorded webinar videos are available on the ISBA YouTube channel. Beyond the webinars, the channel hosts curated playlists from ISBA World Meetings (2012, 2016, 2018, 2021, 2022, 2024), specialized workshops and seminars (ABI, BNP, BayesComp), as well as content from ISBA Sections (j-ISBA, BNP, BioPharma, Industrial).
These resources deserve broad visibility. I warmly encourage you to explore them, share them within your networks, and let us know your feedback.
— Julyan, on behalf of the ISBA Social Media team

model uncertainty and missing data: an objective BAyesian perspective

Posted in Books, Statistics, Travel, University life on September 16, 2025 by xi'an

My Spanish and objective Bayesian friends Gonzalo García-Donato, María Eugenia Castellanos, Stefano Cabras, Alicia Quirós, and Anabel Forte wrote a fairly exciting paper in BA that is open to discussion (for a few more days), to be discussed on 05 November (4:00 PM UTC | 11:00 AM EST | 5:00 PM CET).

The interplay between missing data and model uncertainty—two classic statistical problems—leads to primary questions that we formally address from an objective Bayesian perspective. For the general regression problem, we discuss the probabilistic justification of Rubin’s rules applied to the usual components of Bayesian variable selection, arguing that prior predictive marginals should be central to the pursued methodology. In the regression settings, we explore the conditions of prior distributions that make the missing data mechanism ignorable, provided that it is missing at random or completely at random. Moreover, when comparing multiple linear models, we provide a complete methodology for dealing with special cases, such as variable selection or uncertainty regarding model errors. In numerous simulation experiments, we demonstrate that our method outperforms or equals others, in consistently producing results close to those obtained using the full dataset. In general, the difference increases with the percentage of missing data and the correlation between the variables used for imputation.

The so-called Rubin’s identity is simply the representation of the posterior probability of a model γ given the observed data x⁰, p(γ|x⁰), as the integrated posterior probability of a model given both observed and latent data,  p(γ|x⁰, x¹), against the marginal of latent x¹ given observed x⁰. Since this marginal involves the probabilities p(γ|x⁰), this representation is not directly useful for a numerical implementation.
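In symbols (my notation, following the paragraph above), the identity reads

```latex
p(\gamma \mid x^0)
  = \int p(\gamma \mid x^0, x^1)\, p(x^1 \mid x^0)\, \mathrm{d}x^1,
\qquad
p(x^1 \mid x^0)
  = \sum_{\gamma'} p(x^1 \mid x^0, \gamma')\, p(\gamma' \mid x^0),
```

and the second equation is precisely the circularity: the mixture weights defining the marginal of the latent x¹ already involve the posterior model probabilities one is trying to compute.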

In this paper, missingness relates to some entries of either the covariates or the response variate. Which is less common but more realistic, especially if some covariates do not contribute to the response. (The missingness mechanism does not matter if the data is missing at random, à la Rubin.) The computational solution (p9) is rather standard, simulating the missing variables given the observed variables. In my opinion, the elephant in the room is the super-delicate selection of a prior distribution on the missing covariates, as methinks this impacts in a considerable manner the actual value of the Bayes factor, hence the selection of the surviving model. (As a side remark, we are credited in Celeux et al. (2006) with having “extended DIC for missing data models or when missing data were present”, but our aim was instead to point out the arbitrariness of the very definition of DIC in such contexts.)
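The "rather standard" computational solution amounts to data augmentation: alternating between imputing the missing entries given the parameters and updating the parameters given the completed data. A minimal sketch on a toy Gaussian model of my own choosing (not the paper's regression setting):

```python
import numpy as np

# Minimal data-augmentation (Gibbs) sketch: alternate imputation of the
# missing values and parameter update given the completed data.
# Toy model (my choice, not the paper's): x_i ~ N(mu, 1), flat prior on
# mu, some entries missing completely at random.

rng = np.random.default_rng(3)

mu_true = 1.5
x_obs = mu_true + rng.standard_normal(50)      # observed entries
n_obs, n_mis = x_obs.size, 20                  # 20 entries are missing
n = n_obs + n_mis

mu = 0.0
draws = []
for it in range(6000):
    # Imputation step: missing entries given the current parameter.
    x_mis = mu + rng.standard_normal(n_mis)
    # Parameter step: mu | completed data ~ N(full mean, 1/n) (flat prior).
    full_mean = (x_obs.sum() + x_mis.sum()) / n
    mu = full_mean + rng.standard_normal() / np.sqrt(n)
    if it >= 1000:                             # discard burn-in
        draws.append(mu)

print(np.mean(draws))   # close to x_obs.mean(): imputed entries add no information
```

In this MCAR toy the imputed entries carry no information, so the chain simply recovers the observed-data posterior; in the paper's regression setting the imputation step is where the prior on the missing covariates enters, which is exactly the sensitivity flagged above.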

“The standard Bayesian method for addressing the absence of prior information uses improper distributions. In estimation problems (the model is fixed), the impropriety of priors does not imply any additional difficulty as long as the posterior is proper” (p9)

The authors point out the well-known difficulty with improper priors but still resort to improper priors on the parameters shared by all models—which I dispute as being adequate, despite the arguments put forward on p15, right Haar measure or not—, while sticking to proper priors on the model-dependent parameters. Which unsurprisingly become Zellner’s g-priors. Or rather g’-priors, although the discussion seems to resolve into the (model-free) factor g’ being equal to 1 as for the g-priors. Again a strong term in the derivation of the Bayes factor.