Abstract
Background: A great deal of recent research has focused on compressing and accelerating deep neural networks (DNNs). So far, algorithms achieving high compression rates require part of the training dataset, either for low-precision calibration or for a fine-tuning process. However, this requirement is unacceptable when the data is unavailable or contains sensitive information, as in medical and biometric use cases. Contributions: We present three methods for generating synthetic samples from trained models, and demonstrate how these samples can be used to calibrate and fine-tune quantized models without using any real data in the process. Our best-performing method incurs negligible accuracy degradation compared to calibration on the original training set. This method, which leverages the intrinsic statistics of the trained model's batch normalization layers, can also be used to evaluate data similarity. Our approach opens a path towards genuinely data-free model compression, alleviating the need for training data during model deployment.
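The batch-normalization-based idea can be pictured as optimizing random inputs so that the statistics they induce inside the network match the running statistics stored in its BN layers. The sketch below is a minimal, illustrative toy (plain NumPy, a single 1-D "activation batch", hand-derived gradients; the names `bn_mean`/`bn_var` and all hyperparameters are assumptions, not the paper's implementation): it drives a random vector toward a stored mean and variance by gradient descent on the inputs themselves.

```python
import numpy as np

# Toy sketch: match one BN layer's stored statistics by optimizing the inputs.
# bn_mean / bn_var stand in for a layer's running_mean / running_var.
rng = np.random.default_rng(0)
bn_mean, bn_var = 0.5, 2.0        # pretend these were read from a trained BN layer
x = rng.normal(size=256)          # random init, analogous to synthetic images
lr, steps = 5.0, 2000

for _ in range(steps):
    m, v = x.mean(), x.var()
    # loss = (m - bn_mean)**2 + (v - bn_var)**2
    # d(mean)/dx_i = 1/N ;  d(var)/dx_i = 2*(x_i - m)/N
    grad = 2 * (m - bn_mean) / x.size + 2 * (v - bn_var) * 2 * (x - m) / x.size
    x -= lr * grad

print(x.mean(), x.var())  # both ≈ the stored BN statistics
```

In the full method the same principle applies end to end: the loss aggregates the statistics mismatch over every BN layer, and the gradient is taken with respect to the input images through the frozen network.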
Original language | English
---|---
Title of host publication | 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020
Pages | 8491-8499
Number of pages | 9
State | Published - 2020
Event | 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 - Virtual, Online, United States (14 Jun 2020 → 19 Jun 2020)
Conference

Conference | 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020
---|---
Country/Territory | United States
City | Virtual, Online
Period | 14/06/20 → 19/06/20
All Science Journal Classification (ASJC) codes
- Software
- Computer Vision and Pattern Recognition