TY - GEN
T1 - Pros and Cons of Weight Pruning for Out-of-Distribution Detection
T2 - 2023 International Joint Conference on Neural Networks, IJCNN 2023
AU - Koda, Satoru
AU - Zolfi, Alon
AU - Grolman, Edita
AU - Shabtai, Asaf
AU - Morikawa, Ikuya
AU - Elovici, Yuval
N1 - Publisher Copyright: © 2023 IEEE.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - Deep neural networks (DNNs) perform well on samples from the training distribution. However, DNNs deployed in the real world are exposed to out-of-distribution (OOD) samples, i.e., samples drawn from distributions that differ from the training distribution. OOD detection is indispensable for DNNs, as OOD samples can cause them to behave unexpectedly. This paper empirically explores the effectiveness of DNN weight pruning for OOD detection in a post-hoc setting (i.e., performing OOD detection based on pretrained DNN models). We conduct experiments on image, text, and tabular datasets to thoroughly evaluate the OOD detection performance of weight-pruned DNNs. Our experimental results yield the following three novel findings: (i) Weight pruning improves OOD detection performance more significantly with a Mahalanobis distance-based detection approach, which performs OOD detection on DNN hidden representations using the Mahalanobis distance, than with logit-based detection approaches. (ii) Weight-pruned DNNs tend to extract global features of inputs, which improves OOD detection on samples that are highly dissimilar to the in-distribution samples. (iii) Weights that are useless for classification are often useful for OOD detection, and thus weight importance should not be quantified solely by the sensitivity of weights to classification error. On the basis of these findings, we advocate practical DNN weight pruning techniques that enable weight-pruned DNNs to maintain both OOD detection and classification capabilities.
AB - Deep neural networks (DNNs) perform well on samples from the training distribution. However, DNNs deployed in the real world are exposed to out-of-distribution (OOD) samples, i.e., samples drawn from distributions that differ from the training distribution. OOD detection is indispensable for DNNs, as OOD samples can cause them to behave unexpectedly. This paper empirically explores the effectiveness of DNN weight pruning for OOD detection in a post-hoc setting (i.e., performing OOD detection based on pretrained DNN models). We conduct experiments on image, text, and tabular datasets to thoroughly evaluate the OOD detection performance of weight-pruned DNNs. Our experimental results yield the following three novel findings: (i) Weight pruning improves OOD detection performance more significantly with a Mahalanobis distance-based detection approach, which performs OOD detection on DNN hidden representations using the Mahalanobis distance, than with logit-based detection approaches. (ii) Weight-pruned DNNs tend to extract global features of inputs, which improves OOD detection on samples that are highly dissimilar to the in-distribution samples. (iii) Weights that are useless for classification are often useful for OOD detection, and thus weight importance should not be quantified solely by the sensitivity of weights to classification error. On the basis of these findings, we advocate practical DNN weight pruning techniques that enable weight-pruned DNNs to maintain both OOD detection and classification capabilities.
KW - Out-of-Distribution
KW - Weight Pruning
UR - http://www.scopus.com/inward/record.url?scp=85169594689&partnerID=8YFLogxK
U2 - 10.1109/IJCNN54540.2023.10191141
DO - 10.1109/IJCNN54540.2023.10191141
M3 - Conference contribution
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2023 - International Joint Conference on Neural Networks, Proceedings
Y2 - 18 June 2023 through 23 June 2023
ER -
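
To make the Mahalanobis distance-based detection approach mentioned in the abstract concrete, the sketch below scores a sample by its minimum Mahalanobis distance to the class means of in-distribution hidden features, assuming a class-conditional Gaussian model with a tied covariance. It is a minimal illustration in plain NumPy under these assumptions; the function names, shapes, and the random stand-in data are hypothetical and do not reflect the paper's implementation.

# Minimal sketch of Mahalanobis distance-based OOD scoring on hidden
# representations (class-conditional means with a shared covariance).
# Names and shapes here are illustrative assumptions, not the authors' code.
import numpy as np

def fit_mahalanobis(features, labels, num_classes, eps=1e-6):
    """Estimate per-class means and a shared (tied) precision matrix
    from in-distribution hidden features of shape (N, D)."""
    d = features.shape[1]
    means = np.zeros((num_classes, d))
    cov = np.zeros((d, d))
    for c in range(num_classes):
        fc = features[labels == c]
        means[c] = fc.mean(axis=0)
        centered = fc - means[c]
        cov += centered.T @ centered
    cov /= len(features)
    precision = np.linalg.inv(cov + eps * np.eye(d))  # regularized inverse
    return means, precision

def mahalanobis_ood_score(feature, means, precision):
    """OOD score = minimum squared Mahalanobis distance to any class mean;
    larger values indicate the sample lies farther from the training data."""
    diffs = means - feature  # shape (C, D)
    dists = np.einsum('cd,de,ce->c', diffs, precision, diffs)
    return dists.min()

# Illustrative usage with random stand-in features (2 classes, 8-dim):
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(200, 8))
train_labels = rng.integers(0, 2, size=200)
means, precision = fit_mahalanobis(train_feats, train_labels, num_classes=2)
print(mahalanobis_ood_score(rng.normal(size=8), means, precision))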