This lecture series takes a deeper dive into the Markov chain Monte Carlo (MCMC) methods introduced in Professor Gael Martin’s course. The output of MCMC is a sequence of N samples from a Markov chain that, we hope, asymptotically targets the posterior; but how do we use this chain to estimate quantities of interest in practice? We will cover three key aspects of post-processing: diagnosing convergence, assessing sample quality and using the chain to estimate quantities of interest. The first lecture will cover popular methods and their limitations under modern challenges in Bayesian statistics. The second lecture will introduce modern alternatives which have recently been developed for scalable MCMC.
1. Standard methods for post-processing MCMC
This lecture will introduce popular methods for post-processing MCMC output. The Gelman and Rubin convergence diagnostic, among others, will be introduced as a tool to assess convergence and to control bias via removal of an initial burn-in period. Approaches to using the chain for estimation, ranging from vanilla averaging of samples to control variates, will be described.
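As a toy illustration (our own sketch, not part of the course materials), the Gelman and Rubin diagnostic, burn-in removal and vanilla averaging can all be carried out in a few lines. Here we use NumPy, with independent Gaussian draws standing in for genuine MCMC chains:

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin convergence diagnostic (R-hat) for m chains of length n.

    `chains` has shape (m, n): one scalar quantity traced over m parallel
    chains. Values close to 1 are consistent with convergence; common rules
    of thumb flag values above roughly 1.1 (or, more strictly, 1.01).
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
# Four toy "chains" of independent draws from the same normal target.
chains = rng.normal(size=(4, 1000))
kept = chains[:, 200:]                      # discard a burn-in of 200 samples
print(gelman_rubin(kept))                   # close to 1 for well-mixed chains
print(kept.mean())                          # vanilla averaging of the samples
```

Because these toy chains mix instantly, R-hat is very close to 1; for real MCMC output the diagnostic is computed per parameter, and the burn-in length is chosen with the help of trace plots and diagnostics like this one.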
We will finish the lecture by describing a popular MCMC method for big data, a setting in which standard post-processing methods break down.
2. Modern methods for post-processing MCMC
This lecture will introduce a series of post-processing tools based on Stein’s method, including Stein goodness-of-fit tests, Stein thinning and a general-purpose gradient-based control variate. These methods can be used in standard MCMC, in MCMC for big data and in challenging applications where practitioners wish to trade off bias and variance in their estimates.
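To give a flavour of gradient-based control variates, here is a minimal sketch (our own illustration, not code from the lectures or their accompanying R packages) of first-order zero-variance control variates for a standard normal target, where the score function is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=2000)           # stand-in for MCMC samples from N(0, 1)
f = x**2                            # we want E[f(X)] = 1

# Zero-variance control variates built from the score s(x) = d/dx log p(x) = -x:
# for a polynomial P, h = P''(x) + P'(x) * s(x) has expectation zero under p.
h = np.column_stack([-x,            # P(x) = x       ->  h = -x
                     1 - x**2])     # P(x) = x^2 / 2 ->  h = 1 - x^2

# Choose the coefficients by least squares (regression of f on h).
coef, *_ = np.linalg.lstsq(h, f - f.mean(), rcond=None)
cv_estimate = np.mean(f - h @ coef)

print(np.mean(f))                   # vanilla Monte Carlo estimate
print(cv_estimate)                  # variance-reduced estimate, almost exactly 1
```

For this choice of f the basis happens to contain a near-perfect control variate (1 - x² is, up to sign and a constant, f itself), so the corrected estimate is essentially exact; in realistic problems the score comes from the log posterior gradient already computed by gradient-based samplers, and the variance reduction is partial rather than total.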
We will implement these methods with the support of R packages, for applications in both standard MCMC and “biased” MCMC for big data. This will give you practical tools for your own implementations of MCMC methods.
Professor Gael Martin’s course at the beginning of the AMSI Winter School will give you the perfect background on the topic. Anyone familiar with the basics of MCMC and Bayes’ theorem will be well positioned for this class.
Dr Leah South is a lecturer in statistics at Queensland University of Technology (QUT), an associate investigator of the QUT Centre for Data Science and an associate investigator of the Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers. She is a board member of the Bayesian computation section of the International Society for Bayesian Analysis (ISBA) and a committee member of the Bayesian section of the Statistical Society of Australia. After being awarded her PhD in 2019, Leah worked as a senior research associate in scalable Monte Carlo methods and subsequently began her role as a lecturer at QUT in June 2020.
Leah’s research interests are in Bayesian computational statistics, including Markov chain Monte Carlo (MCMC), sequential Monte Carlo and approximate Bayesian computation. Leah is passionate about her lecture series topic of post-processing of MCMC and has recently co-written a review paper on the topic for the Annual Review of Statistics and Its Application. She is co-organising an online conference on measuring the quality of MCMC output on behalf of ISBA.