Joseph E. Gonzalez, Yucheng Low, Carlos Guestrin, David R. O'Hallaron
As computer clusters become more common and
the size of the problems encountered in the field
of AI grows, there is an increasing demand for
efficient parallel inference algorithms. We consider
the problem of parallel inference on large
factor graphs in the distributed memory setting
of computer clusters. We develop a new, efficient
parallel inference algorithm, DBRSplash,
which incorporates over-segmented graph partitioning,
belief residual scheduling, and uniform-work
Splash operations. We empirically evaluate
the DBRSplash algorithm on a 120-processor
cluster and demonstrate linear to super-linear
performance gains on large factor graph models.
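The abstract only names the components of DBRSplash. As a rough illustration of the belief residual scheduling idea, the sketch below prioritizes vertices whose beliefs have changed the most and roots a bounded "Splash" update at the highest-residual vertex. All names, the toy update rule, and the residual bookkeeping here are assumptions for illustration, not the authors' implementation or the actual factor graph message computations.

```python
# Hedged sketch of belief-residual scheduling (not the authors' code).
# Vertices are updated in order of "residual": how much their belief
# changed since it was last propagated. A Splash grows a small BFS tree
# rooted at the highest-residual vertex so each operation does a
# roughly fixed amount of work.

import heapq
from collections import deque

def splash(root, graph, beliefs, residuals, work_limit=3):
    """Bounded-BFS update rooted at a high-residual vertex (toy update rule)."""
    visited, frontier, work = {root}, deque([root]), 0
    while frontier and work < work_limit:
        v = frontier.popleft()
        new_belief = sum(beliefs[u] for u in graph[v]) / len(graph[v])  # placeholder update
        residuals[v] = abs(new_belief - beliefs[v])
        beliefs[v] = new_belief
        work += 1
        for u in graph[v]:
            if u not in visited:
                visited.add(u)
                frontier.append(u)
                residuals[u] += residuals[v]  # neighbours inherit some residual

def run(graph, beliefs, sweeps=10):
    residuals = {v: 1.0 for v in graph}                # start everything "dirty"
    for _ in range(sweeps):
        heap = [(-r, v) for v, r in residuals.items()] # max-heap via negated residual
        heapq.heapify(heap)
        _, root = heapq.heappop(heap)
        splash(root, graph, beliefs, residuals)
    return beliefs

if __name__ == "__main__":
    # Tiny chain graph a - b - c - d with arbitrary initial beliefs.
    g = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    print(run(g, {"a": 1.0, "b": 0.0, "c": 0.0, "d": 0.0}))
```

In the distributed setting described in the abstract, each machine would hold an over-segmented partition of the factor graph and run such a residual-prioritized queue locally; that mapping is likewise only suggested here, not taken from the paper.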