Suboptimality of MDL and Bayes in Classification under Misspecification

Feb 25, 2007 · 3231 views

We show that forms of Bayesian and MDL learning that are often applied to classification problems can be *statistically inconsistent*. We present a large family of classifiers and a distribution such that the best classifier within the model has generalization error (expected 0/1-prediction loss) almost 0. Nevertheless, no matter how many data are observed, both the classifier inferred by MDL and the classifier based on the Bayesian posterior will behave much worse than this best classifier in the sense that their expected 0/1-prediction loss is substantially larger. Our result can be re-interpreted as showing that under misspecification, Bayes and MDL do not always converge to the distribution in the model that is closest in KL divergence to the data generating distribution. We compare this result with earlier results on Bayesian inconsistency by Diaconis, Freedman and Barron.
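To make the abstract's claims concrete, here is the setup in symbols; the notation below is ours, not from the talk. Write $P^*$ for the data-generating distribution over pairs $(X, Y)$, $\mathcal{F}$ for the family of classifiers, and $\mathcal{M}$ for the associated probabilistic model. The generalization error of a classifier $f$ and the best classifier in the family are

$$
R(f) = P^*\bigl(f(X) \neq Y\bigr), \qquad f^\star = \arg\min_{f \in \mathcal{F}} R(f), \qquad R(f^\star) \approx 0.
$$

Consistency would require the classifier $\hat{f}_n$ selected by MDL or derived from the Bayesian posterior after $n$ observations to satisfy $R(\hat{f}_n) \to R(f^\star)$; instead, the construction keeps the expected 0/1 loss bounded away from the optimum, i.e. there is an $\epsilon > 0$ with $\mathbb{E}\bigl[R(\hat{f}_n)\bigr] \geq R(f^\star) + \epsilon$ for all $n$. In the KL reinterpretation, under misspecification ($P^* \notin \mathcal{M}$) the posterior need not concentrate on the KL projection of $P^*$ onto the model,

$$
\tilde{p} = \arg\min_{p \in \mathcal{M}} D\bigl(P^* \,\|\, p\bigr).
$$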
