Abstract
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning, especially in bandwidth-limited settings and for high-dimensional models. Gradient quantization is an effective way of reducing the number of bits required to communicate each model update, albeit at the cost of a higher error floor due to the increased variance of the quantized stochastic gradients. In this work, we propose an adaptive quantization strategy called AdaQuantFL that aims to achieve communication efficiency as well as a low error floor by adjusting the number of quantization levels over the course of training. Experiments on training deep neural networks show that our method converges using far fewer communicated bits than fixed-quantization-level setups, with little or no impact on training and test accuracy.
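To make the trade-off concrete, below is a minimal Python sketch of an unbiased stochastic quantizer in the style of QSGD, paired with an illustrative schedule that starts with coarse quantization and adds levels as the training loss shrinks. The function `adaptive_levels` and its parameters `s0` and `s_max` are hypothetical stand-ins for exposition, not the exact rule proposed in the paper.

```python
import numpy as np

def quantize(v, s, rng=None):
    """QSGD-style stochastic quantization of a vector v to s levels.

    Each coordinate's magnitude is mapped onto one of s + 1 evenly
    spaced points in [0, ||v||], rounding up or down at random so the
    quantizer is unbiased: E[quantize(v, s)] = v.
    """
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * s          # each entry lies in [0, s]
    lower = np.floor(scaled)               # nearest quantization level below
    prob_up = scaled - lower               # probability of rounding up
    levels = lower + (rng.random(v.shape) < prob_up)
    return np.sign(v) * norm * levels / s

def adaptive_levels(initial_loss, current_loss, s0=2, s_max=64):
    """Illustrative (hypothetical) schedule: use few levels early and
    more levels as the loss decreases, trading a few extra bits per
    round for a lower error floor late in training.
    """
    s = int(s0 * np.sqrt(initial_loss / max(current_loss, 1e-12)))
    return min(max(s, s0), s_max)
```

In a federated round, each client would transmit `quantize(update, s)` using a level count `s` shared with the server; more levels cost more bits per coordinate but reduce quantization variance, which is exactly the error-floor trade-off the abstract describes.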
Original language | English |
---|---|
Title of host publication | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
Pages | 3110-3114 |
Number of pages | 5 |
Volume | 2021-June |
ISBN (Electronic) | 9781728176055 |
DOIs | |
State | Published - 8 Feb 2021 |
Event | IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6 Jun 2021 → 11 Jun 2021 |
All Science Journal Classification (ASJC) codes
- Software
- Signal Processing
- Electrical and Electronic Engineering