2001-12
2019-04-11T01:58
research_article
279-299
Accelerating EM for Large Databases
https://scigraph.springernature.com/explorer/license/
en
true
http://link.springer.com/10.1023/A:1017986506241
articles
2001-12-01
The EM algorithm is a popular method for parameter estimation in a variety of problems involving missing data. However, the EM algorithm often requires significant computational resources and has been dismissed as impractical for large databases. We present two approaches that significantly reduce the computational cost of applying the EM algorithm to databases with a large number of cases, including databases with large dimensionality. Both approaches are based on partial E-steps for which we can use the results of Neal and Hinton (In Jordan, M. (Ed.), Learning in Graphical Models, pp. 355–371. The Netherlands: Kluwer Academic Publishers) to obtain the standard convergence guarantees of EM. The first approach is a version of the incremental EM algorithm, described in Neal and Hinton (1998), which cycles through data cases in blocks. The number of cases in each block dramatically affects the efficiency of the algorithm. We provide a method for selecting a near optimal block size. The second approach, which we call lazy EM, will, at scheduled iterations, evaluate the significance of each data case and then proceed for several iterations actively using only the significant cases. We demonstrate that both methods can significantly reduce computational costs through their application to high-dimensional real-world and synthetic mixture modeling problems for large databases.
Information and Computing Sciences
Bo
Thiesson
Artificial Intelligence and Image Processing
45
10.1023/a:1017986506241
doi
Springer Nature - SN SciGraph project
Microsoft Research, One Microsoft Way, 98052, Redmond, WA, USA
Microsoft (United States)
Christopher
Meek
3
Heckerman
David
0885-6125
1573-0565
Machine Learning
5556fc13dfb48f3086620b936fd0d88d39848a040f2fa61f00b4c875787d04d5
readcube_id
pub.1017911237
dimensions_id