Improved convergence guarantees for learning Gaussian mixture models by EM and gradient EM

Nimrod Segol, Boaz Nadler

Research output: Contribution to journal › Article › peer-review

Abstract

We consider the problem of estimating the parameters of a Gaussian Mixture Model with K components of known weights, all with an identity covariance matrix. We make two contributions. First, at the population level, we present a sharper analysis of the local convergence of EM and gradient EM, compared to previous works. Assuming a separation of Ω(√log K), we prove convergence of both methods to the global optimum from an initialization region larger than those of previous works. Specifically, the initial guess of each component can be as far as (almost) half its distance to the nearest Gaussian. This is essentially the largest possible contraction region. Our second contribution is improved sample size requirements for accurate estimation by EM and gradient EM. In previous works, the required number of samples had a quadratic dependence on the maximal separation between the K components, and the resulting error estimate increased linearly with this maximal separation. In this manuscript we show that both quantities depend only logarithmically on the maximal separation.
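
To make the two iterations concrete, below is a minimal NumPy sketch of EM and gradient EM for a mixture of K unit-covariance Gaussians with known weights, where only the means are updated. This is illustrative only, not the authors' code; all function names and parameters (e.g. step) are assumptions.

import numpy as np

def responsibilities(X, mus, weights):
    # E-step: posterior probability of each component for each sample,
    # under unit-covariance Gaussians. X: (n, d), mus: (K, d), weights: (K,).
    sq = ((X[:, None, :] - mus[None, :, :]) ** 2).sum(axis=-1)   # (n, K)
    logp = np.log(weights)[None, :] - 0.5 * sq
    logp -= logp.max(axis=1, keepdims=True)                      # stabilize exp
    r = np.exp(logp)
    return r / r.sum(axis=1, keepdims=True)

def em_step(X, mus, weights):
    # M-step: weights and covariances are known, so only the means update.
    r = responsibilities(X, mus, weights)
    return (r.T @ X) / r.sum(axis=0)[:, None]

def gradient_em_step(X, mus, weights, step=1.0):
    # Gradient EM: a single gradient ascent step on the EM surrogate Q,
    # replacing the exact mean update with a step of size `step`.
    r = responsibilities(X, mus, weights)
    grad = (r[:, :, None] * (X[:, None, :] - mus[None, :, :])).mean(axis=0)
    return mus + step * grad

# Toy usage: two well-separated components, initialized within half the
# distance to the nearest other mean, as in the paper's contraction region.
rng = np.random.default_rng(0)
true_mus = np.array([[4.0, 0.0], [-4.0, 0.0]])
weights = np.array([0.5, 0.5])
labels = rng.choice(2, size=2000, p=weights)
X = true_mus[labels] + rng.standard_normal((2000, 2))
mus = true_mus + np.array([[1.5, 1.5], [-1.5, -1.5]])  # perturbed start
for _ in range(30):
    mus = em_step(X, mus, weights)

Here em_step applies the exact M-step (a responsibility-weighted mean), while gradient_em_step instead moves each mean along the gradient of the EM surrogate, which is the variant the paper analyzes alongside EM.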
Original language: English
Pages (from-to): 4510-4544
Number of pages: 35
Journal: Electronic Journal of Statistics
Volume: 15
Issue number: 2
Early online date: 23 Sep 2021
State: Published - 2021

