These two lectures will introduce students to the key foundational, methodological and computational principles underlying the Bayesian statistical paradigm. We will begin with a brief history of Bayesian statistics, and outline the contrast between the Bayesian and frequentist (or classical) paradigms. We will then look at all three aspects of Bayesian inference: inference about unknown parameters, inference about unknown models and prediction of unknown future (or out-of-sample) values. The core computational challenge associated with the implementation of ‘Bayes’ – the evaluation of posterior expectations – will be highlighted, and the three main categories of computational method briefly described: 1) deterministic integration methods; 2) (exact) simulation methods (e.g. Markov chain Monte Carlo (MCMC)); and 3) approximate methods (e.g. approximate Bayesian computation; variational Bayes). This will then set the scene for the later lectures, in which more detailed expositions of modern computational techniques will be provided.
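To fix notation for that computational challenge, write y for the data, θ for the unknown parameter(s) and g(·) for a generic function of interest (standard notation only, not tied to any particular lecture example). Bayes' theorem and the posterior expectation that must be evaluated are then

    p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{\int p(y \mid \theta)\, p(\theta)\, d\theta},
    \qquad
    E[g(\theta) \mid y] = \int g(\theta)\, p(\theta \mid y)\, d\theta,

with the normalizing integral in the denominator (the marginal likelihood) and the final integral rarely available in closed form; hence the need for the numerical methods itemized above.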
An additional ‘tutorial’ will be conducted in which we will run a couple of R programs that implement (and ‘diagnose’) various simple MCMC algorithms.
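To give a flavour of what such a program looks like, the following minimal R sketch (purely illustrative, and not one of the tutorial programs; the data, prior and tuning settings are all hypothetical) implements a random-walk Metropolis-Hastings sampler for the mean of a normal distribution with known variance, and reports two basic diagnostics: the acceptance rate and a trace plot.

## Illustrative sketch only: random-walk Metropolis-Hastings for a normal mean
## with known variance. Not one of the tutorial programs; all settings are
## hypothetical.

set.seed(1)

## Simulated data: 100 observations from N(theta = 2, sd = 1)
y <- rnorm(100, mean = 2, sd = 1)

## Log posterior (up to an additive constant), with a N(0, 10^2) prior on theta
log_post <- function(theta) {
  sum(dnorm(y, mean = theta, sd = 1, log = TRUE)) +
    dnorm(theta, mean = 0, sd = 10, log = TRUE)
}

n_iter <- 5000
step   <- 0.3          # standard deviation of the random-walk proposal (tuning parameter)
draws  <- numeric(n_iter)
theta  <- 0            # initial value
n_acc  <- 0

for (i in seq_len(n_iter)) {
  prop <- theta + rnorm(1, sd = step)    # symmetric random-walk proposal
  ## Accept with probability min(1, posterior ratio), computed on the log scale
  if (log(runif(1)) < log_post(prop) - log_post(theta)) {
    theta <- prop
    n_acc <- n_acc + 1
  }
  draws[i] <- theta
}

## Two simple diagnostics: acceptance rate and a trace plot of the chain
cat("acceptance rate:", n_acc / n_iter, "\n")
plot(draws, type = "l", xlab = "iteration", ylab = expression(theta))

## Posterior mean E[theta | y], estimated by the post-burn-in sample mean
mean(draws[-(1:1000)])

The post-burn-in sample mean on the last line is precisely a simulation-based estimate of a posterior expectation, the core computational object highlighted in the first lecture.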
The lectures will be self-contained and assume only a third-year undergraduate level understanding of the principles of mathematical statistics (including likelihood functions and frequentist sampling properties). No prior knowledge of Bayesian statistics will be assumed.
For those interested in undertaking some preliminary reading, however, I suggest one of the following textbooks:
1. Geweke, J. (2005) Contemporary Bayesian Econometrics and Statistics, Wiley.
2. Greenberg, E. (2008) An Introduction to Bayesian Econometrics, Cambridge University Press.
3. Koop, G. (2003) Bayesian Econometrics, Wiley.
4. O’Hagan, A. and Forster, J. (2004) Kendall’s Advanced Theory of Statistics, Volume 2B: Bayesian Inference, Arnold.
5. Robert, C.P. (1994) The Bayesian Choice: A Decision-Theoretic Motivation, Springer-Verlag.
6. Robert, C.P. and Casella, G. (2004) Monte Carlo Statistical Methods, 2nd ed., Springer-Verlag.
I also recommend reading an historical review paper that I have recently co-authored with David Frazier and Christian Robert, “Computing Bayes: Bayesian Computation from 1763 to the 21st Century”, available at https://arxiv.org/abs/2004.06425.
This paper has been written with young researchers in mind, in particular those new to the field, and is hence deliberately accessible in style. It also contains a large number of references for you to follow up as you proceed through the various stages of the Winter School.
Gael Martin is a Professor of Econometrics and PhD Director in the Department of Econometrics and Business Statistics at Monash University, and a Fellow of the Academy of the Social Sciences in Australia. She was an Australian Research Council Future Fellow from 2010 to 2013. Her primary research interests have been in developing statistical methods for complex dynamic models. The development, application and validation of Bayesian simulation-based methods are central to her research, with recent contributions made to the burgeoning field of approximate Bayesian computation. Her interest is not just in methods of inference and computation, but also in prediction, including the impact of inferential technique and modelling assumptions on predictive accuracy. Recent work explores the effect of making predictions with a model that does not accord with reality, and proposes a new ‘loss-based’ paradigm that delivers accurate predictions when the predictive model is wrong.
She is currently an Associate Editor of the Journal of Applied Econometrics, the International Journal of Forecasting (IJF) and Econometrics and Statistics, and was a guest editor for a special issue of the IJF on Bayesian Forecasting in Economics.
All published work and current projects can be found at: http://users.monash.edu.au/~gmartin/