
Variational Inference and Experimental Design for Sparse Linear Models

Dec 18, 2008 · 4487 views

Sparsity is a fundamental concept in modern statistics, and often the only general principle currently available for addressing novel learning applications with many more variables than observations. Despite recent advances in the theoretical understanding and the algorithmics of sparse point estimation, higher-order problems such as covariance estimation or optimal data acquisition are seldom addressed for sparsity-favouring models, and there are virtually no scalable algorithms.

We provide an approximate Bayesian inference algorithm for sparse linear models that can be used with hundreds of thousands of variables. Our method employs a convex relaxation to variational inference and settles an open question in continuous Bayesian inference: the Gaussian lower-bound relaxation is convex for a class of super-Gaussian potentials including the Laplace and Bernoulli potentials.

Our algorithm reduces to the same computational primitives used by sparse estimation methods, but additionally requires Gaussian marginal variance estimation. We show how the Lanczos algorithm from numerical mathematics can be employed to compute the latter.

We are interested in Bayesian experimental design, a powerful framework for optimizing measurement architectures. We have applied our framework to problems of magnetic resonance imaging design and reconstruction.
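The two computational ideas named in the abstract can be made concrete. The Gaussian lower-bound relaxation rests on super-Gaussian representations such as the Laplace bound exp(-tau*|s|) = max_{gamma > 0} exp(-s^2/(2*gamma) - tau^2*gamma/2), and the marginal variances it requires are the diagonal of the posterior covariance A^{-1}. The sketch below is not the speaker's implementation; it illustrates one common way Lanczos iterations can approximate these variances: run k steps on the precision matrix A to obtain Q_k and T_k, then use diag(Q_k T_k^{-1} Q_k^T) as an estimate of diag(A^{-1}). The precision A = X^T X + diag(pi), the helper name lanczos_marginal_variances, and the toy data are illustrative assumptions.

# Minimal sketch (assumed setup, not the speaker's code): Lanczos-based
# estimation of Gaussian marginal variances diag(A^{-1}) for an SPD
# precision matrix A accessed only through matrix-vector products.
import numpy as np

def lanczos_marginal_variances(matvec, n, k, rng=None):
    """Approximate diag(A^{-1}) from k Lanczos iterations, with A given
    implicitly through `matvec` (A symmetric positive definite)."""
    rng = np.random.default_rng() if rng is None else rng
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    beta_prev = 0.0
    for j in range(k):
        Q[:, j] = q
        v = matvec(q) - beta_prev * q_prev
        alpha[j] = q @ v
        v -= alpha[j] * q
        # full reorthogonalisation keeps the Lanczos basis numerically orthogonal
        v -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ v)
        beta[j] = np.linalg.norm(v)
        if beta[j] < 1e-12:  # breakdown: Krylov space exhausted
            Q = Q[:, :j + 1]
            alpha, beta = alpha[:j + 1], beta[:j + 1]
            break
        q_prev, q = q, v / beta[j]
        beta_prev = beta[j]
    # tridiagonal matrix T_k with alpha on the diagonal, beta off-diagonal
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    # diag(A^{-1}) ~= diag(Q_k T_k^{-1} Q_k^T); only a small k-by-k solve
    W = np.linalg.solve(T, Q.T).T
    return np.sum(W * Q, axis=1)

# Toy sparse linear model: A = X^T X + diag(pi) (all values illustrative)
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
pi = np.full(200, 0.5)
matvec = lambda v: X.T @ (X @ v) + pi * v
var_approx = lanczos_marginal_variances(matvec, n=200, k=80, rng=rng)

With k much smaller than the number of variables, this estimate typically undershoots the true marginal variances; the trade-off between k and accuracy is what makes the Lanczos route attractive at the problem sizes quoted in the abstract.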
