Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning

Divyansh Jhunjhunwala, Advait Gadhikar, Gauri Joshi, Yonina C. Eldar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning, especially in bandwidth-limited settings and for high-dimensional models. Gradient quantization is an effective way of reducing the number of bits required to communicate each model update, albeit at the cost of a higher error floor due to the increased variance of the stochastic gradients. In this work, we propose an adaptive quantization strategy called AdaQuantFL that aims to achieve communication efficiency as well as a low error floor by changing the number of quantization levels during the course of training. Experiments on training deep neural networks show that our method can converge using far fewer communicated bits than fixed-quantization-level setups, with little or no impact on training and test accuracy.
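To make the quantization step concrete, below is a minimal NumPy sketch of QSGD-style stochastic uniform quantization with a configurable number of levels, which is the kind of primitive an adaptive scheme like AdaQuantFL would vary across rounds. The function name `quantize` and the level schedule in the usage snippet are illustrative assumptions; the sketch does not reproduce the paper's actual adaptive rule for choosing the number of levels.

```python
import numpy as np

def quantize(update, num_levels):
    """Stochastic uniform quantization of a model update (QSGD-style sketch).

    Each coordinate's magnitude, scaled by the update's L2 norm, is rounded
    up or down to one of `num_levels` buckets at random, so that the
    quantized vector is an unbiased estimate of the original update.
    """
    norm = np.linalg.norm(update)
    if norm == 0:
        return np.zeros_like(update)
    scaled = np.abs(update) / norm * num_levels        # values in [0, num_levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower                           # probability of rounding up
    levels = lower + (np.random.rand(*update.shape) < prob_up)
    return np.sign(update) * levels / num_levels * norm  # unbiased reconstruction

# Illustrative usage: sweep a hypothetical schedule of quantization levels.
# (AdaQuantFL adapts the number of levels during training; the specific
# schedule below is made up for demonstration only.)
update = np.random.randn(10)
for round_id, s in enumerate([2, 4, 8, 16]):
    q = quantize(update, s)
    print(round_id, s, np.linalg.norm(q - update))
```

More levels reduce the quantization error (and hence the error floor) but cost more bits per coordinate, which is the trade-off an adaptive level schedule is meant to balance.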
Original language: English
Title of host publication: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Pages: 3110-3114
Number of pages: 5
Volume: 2021-June
ISBN (Electronic): 9781728176055
State: Published - 8 Feb 2021
Event: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) - Toronto, ON, Canada
Duration: 6 Jun 2021 - 11 Jun 2021

Conference

Conference: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Period: 6/06/21 - 11/06/21

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
