Carnegie Mellon University

Improving the Performance and Understanding of the Expectation Maximization Algorithm: Evolutionary and Visualization Methods

Posted on 2016-08-01, authored by Priya Krishnan Sundararajan

The Expectation Maximization (EM) algorithm is a method for learning the parameters of probabilistic graphical models when there is hidden or missing data. The goal of an EM algorithm is to estimate a set of parameters that maximizes the likelihood of the data. Despite its success in practice, the EM algorithm has several limitations, including slow convergence, computational complexity, and an inability to escape local maxima. Using multiple random starting points is a popular way to mitigate the local-maxima problem, but it is time-consuming. This work seeks to improve the understanding and performance of the EM algorithm by combining evolutionary algorithms, which use stochastic search, with the multiple-random-starting-points strategy.

First, we propose a genetic algorithm for expectation maximization (GAEM), which combines the global search property of genetic algorithms (GAs) with the local search property of EM. We investigate how different choices of population size, crossover and mutation probabilities, and selection technique affect solution quality. We find that small population sizes are sufficient to produce high solution quality with considerable speed-up over the traditional EM algorithm.

Second, we develop an age-layered EM algorithm (ALEM), which incorporates an age-layered population structure heuristic in which age is the number of iterations of an EM run. We focus on speeding up the EM algorithm for Bayesian networks. ALEM compares similarly aged EM runs and discards less promising runs well before they converge. Experimentally, we find that ALEM can significantly reduce the average number of iterations with no or minimal degradation in solution quality.

Finally, we introduce an intuitive graphical user interface (GUI) to visualize and analyze graphs, including Bayesian networks. In particular, the user can perform multi-focus zooming, comparing multiple nodes in an overview graphical window while studying their parameters in detail windows. For EM learning, this GUI helps users follow how the estimated probability parameters progress.
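The two ideas above — restarting EM from multiple random points and discarding less promising runs of the same age before convergence — can be sketched as follows. This is a toy illustration on a two-component 1-D Gaussian mixture with invented names (`em_step`, `em_race`), not the thesis's GAEM/ALEM algorithms, which target Bayesian networks.

```python
# Illustrative sketch only (not the thesis's exact GAEM/ALEM algorithms):
# plain EM for a two-component 1-D Gaussian mixture, started from several
# random points, with an early-discard heuristic in the spirit of ALEM --
# runs of the same "age" (iteration count) are compared and the least
# promising one is dropped before convergence. All names are our own.
import math
import random

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_step(data, pi, mu, var):
    """One E-step/M-step pass; returns updated parameters and log-likelihood."""
    resp, ll = [], 0.0
    for x in data:
        # E-step: posterior responsibility of each component for point x.
        p = [pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
        s = sum(p)
        ll += math.log(s)
        resp.append([pk / s for pk in p])
    for k in range(2):
        # M-step: re-estimate weight, mean, and variance of component k.
        nk = max(sum(r[k] for r in resp), 1e-12)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                               for r, x in zip(resp, data)) / nk)
    return pi, mu, var, ll

def em_race(data, n_starts=6, iters=40, prune_every=5, keep_at_least=2):
    rng = random.Random(0)
    # Multiple random starting points: each run's means are drawn from the data.
    runs = [([0.5, 0.5], [rng.choice(data), rng.choice(data)],
             [1.0, 1.0], -math.inf) for _ in range(n_starts)]
    for t in range(1, iters + 1):
        runs = [em_step(data, pi, mu, var) for pi, mu, var, _ in runs]
        # Same-age comparison: periodically discard the worst-likelihood run.
        if t % prune_every == 0 and len(runs) > keep_at_least:
            runs.remove(min(runs, key=lambda r: r[3]))
    return max(runs, key=lambda r: r[3])  # best (pi, mu, var, ll)

gen = random.Random(1)
data = ([gen.gauss(0.0, 0.5) for _ in range(30)]
        + [gen.gauss(5.0, 0.5) for _ in range(30)])
pi, mu, var, ll = em_race(data)
```

On this well-separated synthetic data, the surviving run should place its two means near the true cluster centers (0 and 5); runs whose random start traps them near the symmetric local optimum tend to have lower likelihood and are pruned early, which is the intuition behind discarding runs before convergence.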




Degree Type

  • Dissertation

Department

  • Electrical and Computer Engineering

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

  • Ole Mengshoel
