TY - GEN
T1 - Partial Model Pruning and Personalization for Wireless Federated Learning
AU - Liu, Xiaonan
AU - Ratnarajah, Tharmalingam
AU - Sellathurai, Mathini
AU - Eldar, Yonina C.
N1 - Publisher Copyright: © 2024 IEEE.
PY - 2024
Y1 - 2024
AB - To address the heterogeneity of devices' data and guarantee high computation and communication efficiency in federated learning (FL), we consider an FL framework with partial model pruning and personalization. This framework splits the learning model into a global part, which is pruned and shared with all devices to learn data representations, and a personalized part, which is fine-tuned for each device. It adapts the model size during FL to reduce both computation and communication latency, and it increases learning accuracy for devices with non-independent and identically distributed data. The computation and communication latency and the convergence of the proposed FL framework are analyzed. To maximize the convergence rate and guarantee learning accuracy, the Karush-Kuhn-Tucker (KKT) conditions are applied to jointly optimize the pruning ratio and bandwidth allocation. Finally, experimental results demonstrate that the proposed FL framework achieves a reduction of approximately 50% in computation and communication latency compared with FL with partial model personalization.
KW - Partial model pruning and personalization
KW - communication and computation latency
KW - federated learning
UR - http://www.scopus.com/inward/record.url?scp=85207046791&partnerID=8YFLogxK
U2 - 10.1109/SPAWC60668.2024.10694371
DO - 10.1109/SPAWC60668.2024.10694371
M3 - Conference contribution
T3 - IEEE Workshop on Signal Processing Advances in Wireless Communications, SPAWC
SP - 31
EP - 35
BT - 2024 IEEE 25th International Workshop on Signal Processing Advances in Wireless Communications, SPAWC 2024
T2 - 25th IEEE International Workshop on Signal Processing Advances in Wireless Communications, SPAWC 2024
Y2 - 10 September 2024 through 13 September 2024
ER -