Bayesian

I was born in Tunbridge Wells where the Reverend Thomas Bayes spent much of his life. Unlike Bayes I am not a minister of the church, although I do have a religious-like fervour for all things Bayesian.

If you don’t know, Bayesian statistics is an analytical framework increasingly used in research across the social, physical and biological sciences. Specifically, Bayesian statistics uses the language of probability to express our uncertainty about scientific hypotheses. A ‘hypothesis’ is an explanation of the state of the world that may be true or false; for a hypothesis to be ‘scientific’, it must be testable with data. Accordingly, Bayesian statistics uses data we collect from the world to update our views on scientific hypotheses.
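To make that updating process concrete, here is a minimal sketch of a Bayesian update in Python, using a hypothetical example: a beta-binomial model for the probability that a coin lands heads. The prior parameters and the data are illustrative, not taken from anywhere in particular.

```python
# Hypothesis: the coin lands heads with some unknown probability theta.
# Prior belief about theta: Beta(a, b) with a = b = 1, i.e. uniform on [0, 1].
a, b = 1, 1

# Data collected from the world: 7 heads in 10 flips (hypothetical).
heads, flips = 7, 10

# For a binomial likelihood the Beta prior is conjugate, so the
# posterior is available in closed form: Beta(a + heads, b + tails).
a_post = a + heads
b_post = b + (flips - heads)

# The posterior mean is our updated estimate of theta.
posterior_mean = a_post / (a_post + b_post)
print(a_post, b_post, round(posterior_mean, 3))  # prints: 8 4 0.667
```

The data pull the estimate away from the prior mean of 0.5 towards the observed frequency of 0.7; with more data the posterior would concentrate further.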

As part of my fascination with Bayesian statistics, I wrote a textbook, ‘A Student’s Guide to Bayesian Statistics’, which was published by Sage in May 2018, and is now available to order on Amazon.

In a collaboration with Michael Clerx, Martin Robinson, Sanmitra Ghosh, Chon Lei, Gary Mirams and Dave Gavaghan at the University of Oxford and the University of Nottingham, we are developing a robust software package (called Pints) for efficient optimisation and sampling in difficult time series models in the physical and biological sciences. A paper is currently being written to introduce Pints and detail a number of computational experiments that we have run using this software.

I also collaborate with Simon Tavener at Colorado State University (and Dave Gavaghan, again) to develop a probabilistic framework for inverse sensitivity analysis in deterministic systems. A paper is forthcoming here that introduces a sampling methodology allowing inverse sensitivity analyses to be conducted on models of significantly higher dimension than was previously possible.

I also teach a course on Bayesian statistics for PhD students at the University of Oxford. The course page with information on the syllabus is here.