Distributed Markov chain Monte Carlo

Jan 13, 2011 · 4102 views
We consider the design of Markov chain Monte Carlo (MCMC) methods for large-scale, distributed, heterogeneous compute facilities, with a focus on synthesising sample sets across multiple runs performed in parallel. While theory suggests that many independent Markov chains may be run and their samples pooled, the well-known practical problem of quasi-ergodicity, or poor mixing, frustrates this otherwise simple approach. Furthermore, without some mechanism for hastening the convergence of individual chains, the overall speedup from parallelism is limited by the portion of each chain that must be discarded as burn-in. Existing multiple-chain methods, such as parallel tempering and population MCMC, use a synchronous exchange of samples to expedite convergence. This work instead proposes mixing in an additional independent proposal, representing some hitherto best estimate or summary of the posterior, and cooperatively adapting this across chains. Such adaptation can be asynchronous, increases the ensemble's robustness to quasi-ergodic behaviour in constituent chains, and may improve overall fault tolerance.
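The scheme described above can be sketched in code. The following is an illustrative (not the speaker's) implementation, assuming a simple 1-D Gaussian target: each chain takes Metropolis-Hastings steps whose proposal is a mixture of a local random walk and a shared independent Gaussian proposal, the latter periodically re-fitted from the pooled samples of all chains. The mixture weight `beta`, the target density, and the adaptation schedule are all assumptions for the sake of the sketch; the adaptation here is synchronous for brevity, whereas the talk's point is that it can be done asynchronously.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Illustrative target: standard 1-D Gaussian log-density (up to a constant).
    return -0.5 * x**2

def mh_step(x, shared_mu, shared_sigma, beta=0.3, step=0.5):
    """One MH step mixing a random walk with an independent Gaussian proposal.

    With probability beta, propose from N(shared_mu, shared_sigma^2) -- the
    cooperatively adapted "best estimate" of the posterior; otherwise take a
    local random-walk step. The acceptance ratio uses the full mixture
    proposal density, so the correct invariant distribution is preserved.
    """
    if rng.random() < beta:
        y = shared_mu + shared_sigma * rng.standard_normal()
    else:
        y = x + step * rng.standard_normal()

    def log_q(frm, to, b=beta):
        # Mixture proposal density q(to | frm).
        rw = np.exp(-0.5 * ((to - frm) / step) ** 2) / (step * np.sqrt(2 * np.pi))
        ind = np.exp(-0.5 * ((to - shared_mu) / shared_sigma) ** 2) / (
            shared_sigma * np.sqrt(2 * np.pi))
        return np.log((1 - b) * rw + b * ind)

    log_alpha = (log_target(y) + log_q(y, x)) - (log_target(x) + log_q(x, y))
    return y if np.log(rng.random()) < log_alpha else x

# Run several chains in lockstep; periodically re-fit the shared independent
# proposal from the pooled samples (cooperative adaptation).
chains = [rng.standard_normal() for _ in range(4)]
pooled = []
mu, sigma = 0.0, 5.0  # initial, deliberately diffuse shared proposal
for it in range(2000):
    chains = [mh_step(x, mu, sigma) for x in chains]
    pooled.extend(chains)
    if it % 200 == 199:  # adaptation step: summarise the pooled posterior estimate
        mu, sigma = np.mean(pooled), max(np.std(pooled), 1e-3)
```

Because the shared proposal is independent of each chain's current state, a chain trapped in a minor mode can jump directly to regions that other chains have found, which is the mechanism by which the ensemble gains robustness to quasi-ergodic behaviour.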

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International license.