Friday, March 9, 2018
Lecture room B78
Institute for Exact Sciences
Sidlerstr. 5, CH-3012 Bern
"Safe Bayes" to "Safe Probability"
Peter Grünwald (CWI and Leiden University)
Bayesian inference can behave badly if the model under consideration is wrong yet useful: the posterior may fail to concentrate even for large samples, leading to extreme overfitting in practice. We demonstrate this on a simple regression problem. The problem goes away if we exponentiate the likelihood with the "right" "learning rate", which essentially amounts to making the prior more important and the data less so. The resulting generalized posterior is guaranteed to concentrate around the distribution that is closest in KL divergence to the surmised truth. But what can one do with such a posterior? It may be safely used for some prediction tasks, yet be quite unsafe for others (for example, with a misspecified linear regression model, predictions under squared error loss are safe; predictions under absolute loss are not). We formalize this notion of 'safety' and propose a general theory of safe probability, which allows us to specify, for a given distribution, which inference tasks it can be used for and which it cannot, and which is of interest from Bayesian and frequentist perspectives alike.
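To make the "learning rate" idea concrete, the sketch below (not from the talk; a minimal illustration under simplifying assumptions) computes an eta-generalized (tempered) posterior for the mean of a Normal distribution with known variance and a conjugate Normal prior. Raising the likelihood to the power eta < 1 is equivalent to inflating the noise variance, so the data count for less and the prior for more; the function name and parameters are invented for this example.

```python
import numpy as np

def tempered_posterior(x, mu0=0.0, tau0=1.0, sigma=1.0, eta=1.0):
    """Eta-generalized posterior for the mean of a Normal with known
    variance sigma^2, under a Normal(mu0, tau0^2) prior.

    Tempering raises the likelihood to the power eta, which for this
    conjugate model is the same as replacing the noise variance
    sigma^2 by sigma^2 / eta in the usual update.
    """
    n = len(x)
    prec = 1.0 / tau0**2 + eta * n / sigma**2              # posterior precision
    mean = (mu0 / tau0**2 + eta * np.sum(x) / sigma**2) / prec
    return mean, 1.0 / np.sqrt(prec)                       # posterior mean, sd

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=50)                          # data with true mean 2

m1, s1 = tempered_posterior(x, eta=1.0)                    # standard Bayes
m5, s5 = tempered_posterior(x, eta=0.5)                    # learning rate 1/2
# Smaller eta pulls the posterior mean toward the prior mean (here 0)
# and widens the posterior: the prior matters more, the data less.
```

With a well-specified model the optimal learning rate is eta = 1 (ordinary Bayes); under misspecification, choosing a smaller eta restores posterior concentration at the cost of slower learning.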