Parallel Exact Inference on Multi-Core Processors

Jan 19, 2010

Exact inference in Bayesian networks is a fundamental AI technique with numerous applications, including medical diagnosis, consumer help desks, pattern recognition, credit assessment, data mining, and genetics. Inference is NP-hard, and many applications require real-time performance. In this talk we present task- and data-parallel techniques that achieve scalable performance on general-purpose multi-core and heterogeneous multi-core architectures. We develop collaborative schedulers that dynamically map junction tree tasks, leading to highly optimized implementations. We design lock-free structures to reduce thread coordination overheads in scheduling while balancing the load across threads. For the Cell BE, we develop a lightweight centralized scheduler that coordinates the activities of the synergistic processing elements (SPEs). Our scheduler is further optimized for throughput-oriented architectures such as the Sun Niagara processors. We demonstrate scalable and efficient Pthreads implementations for a wide class of Bayesian networks with various topologies, clique widths, and numbers of states of the random variables. Our implementations show improved performance compared with OpenMP and compiler-based optimizations.

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International license.