TY - GEN

T1 - Communication-efficient algorithms for distributed stochastic principal component analysis

AU - Garber, Dan

AU - Shamir, Ohad

AU - Srebro, Nathan

N1 - Publisher Copyright: Copyright 2017 by the author(s).

PY - 2017

Y1 - 2017

N2 - We study the fundamental problem of Principal Component Analysis in a statistical distributed setting in which each machine out of m stores a sample of n points sampled i.i.d. from a single unknown distribution. We study algorithms for estimating the leading principal component of the population covariance matrix that are both communication-efficient and achieve estimation error of the order of the centralized ERM solution that uses all mn samples. On the negative side, we show that in contrast to results obtained for distributed estimation under convexity assumptions, for the PCA objective, simply averaging the local ERM solutions cannot guarantee error that is consistent with the centralized ERM. We show that this unfortunate phenomenon can be remedied by performing a simple correction step which correlates between the individual solutions, and provides an estimator that is consistent with the centralized ERM for sufficiently large n. We also introduce an iterative distributed algorithm that is applicable in any regime of n, which is based on distributed matrix-vector products. The algorithm gives significant acceleration in terms of communication rounds over previous distributed algorithms, in a wide regime of parameters.

AB - We study the fundamental problem of Principal Component Analysis in a statistical distributed setting in which each machine out of m stores a sample of n points sampled i.i.d. from a single unknown distribution. We study algorithms for estimating the leading principal component of the population covariance matrix that are both communication-efficient and achieve estimation error of the order of the centralized ERM solution that uses all mn samples. On the negative side, we show that in contrast to results obtained for distributed estimation under convexity assumptions, for the PCA objective, simply averaging the local ERM solutions cannot guarantee error that is consistent with the centralized ERM. We show that this unfortunate phenomenon can be remedied by performing a simple correction step which correlates between the individual solutions, and provides an estimator that is consistent with the centralized ERM for sufficiently large n. We also introduce an iterative distributed algorithm that is applicable in any regime of n, which is based on distributed matrix-vector products. The algorithm gives significant acceleration in terms of communication rounds over previous distributed algorithms, in a wide regime of parameters.

UR - http://www.scopus.com/inward/record.url?scp=85048431764&partnerID=8YFLogxK

M3 - Conference contribution

T3 - 34th International Conference on Machine Learning, ICML 2017

SP - 1943

EP - 1964

BT - 34th International Conference on Machine Learning, ICML 2017

T2 - 34th International Conference on Machine Learning, ICML 2017

Y2 - 6 August 2017 through 11 August 2017

ER -