Communication complexity of distributed convex learning and optimization

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study the fundamental limits of communication-efficient distributed methods for convex learning and optimization, under different assumptions on the information available to individual machines and the types of functions considered. We identify cases where existing algorithms are already worst-case optimal, as well as cases where room for further improvement remains. Among other things, our results indicate that without similarity between the local objective functions (due to statistical data similarity or otherwise), many communication rounds may be required, even if the machines have unbounded computational power.
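For context, the sketch below states the standard distributed setting the abstract alludes to: m machines, each holding a local convex function F_i, jointly minimizing an aggregate objective while communicating in rounds. The averaged-objective form and the symbols (m, F_i, w, W) are assumptions consistent with the abstract, not text quoted from the paper.

```latex
% Minimal sketch of the assumed distributed convex optimization setup.
% m machines; machine i only has access to its local convex function F_i.
\[
  \min_{w \in \mathcal{W}} \; F(w) \;=\; \frac{1}{m} \sum_{i=1}^{m} F_i(w)
\]
% Progress toward the minimizer is measured against the number of
% communication rounds between the machines, which is the resource
% whose fundamental limits the paper studies.
```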

Original language: English
Pages (from-to): 1756-1764
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 2015-January
DOIs
State: Published - 7 Dec 2015
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: 7 Dec 2015 – 12 Dec 2015

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
