TY - GEN
T1 - Information Theoretic Private Inference in Quantized Models
AU - Raviv, Netanel
AU - Bitar, Rawad
AU - Yaakobi, Eitan
N1 - Publisher Copyright: © 2022 IEEE.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - In a Private Inference scenario, a server holds a model (e.g., a neural network), a user holds data, and the user wishes to apply the model to her data. The privacy of both parties must be protected: the user's data might contain confidential information, and the server's model is his intellectual property. Private inference has been studied extensively in recent years, mostly from a cryptographic perspective by incorporating homomorphic encryption and multiparty computation protocols, which incur high computational overhead and degrade the accuracy of the model. In this work we take an orthogonal approach which draws inspiration from the expansive Private Information Retrieval literature. We view private inference as the task of retrieving an inner product of a parameter vector with the data, a fundamental step in most machine learning models. By combining binary arithmetic with real-valued arithmetic, we present a scheme which enables the retrieval of the inner product for models whose weights are either binarized or given in fixed-point representation; such models have gained increased attention recently due to their ease of implementation and increased robustness. We also present a fundamental trade-off between the privacy of the user and that of the server, and show that our scheme is optimal in this sense. Our scheme is simple, universal to a large family of models, provides clear information-theoretic guarantees to both parties with zero accuracy loss, and, in addition, is compatible with continuous data distributions and allows infinite precision.
AB - In a Private Inference scenario, a server holds a model (e.g., a neural network), a user holds data, and the user wishes to apply the model to her data. The privacy of both parties must be protected: the user's data might contain confidential information, and the server's model is his intellectual property. Private inference has been studied extensively in recent years, mostly from a cryptographic perspective by incorporating homomorphic encryption and multiparty computation protocols, which incur high computational overhead and degrade the accuracy of the model. In this work we take an orthogonal approach which draws inspiration from the expansive Private Information Retrieval literature. We view private inference as the task of retrieving an inner product of a parameter vector with the data, a fundamental step in most machine learning models. By combining binary arithmetic with real-valued arithmetic, we present a scheme which enables the retrieval of the inner product for models whose weights are either binarized or given in fixed-point representation; such models have gained increased attention recently due to their ease of implementation and increased robustness. We also present a fundamental trade-off between the privacy of the user and that of the server, and show that our scheme is optimal in this sense. Our scheme is simple, universal to a large family of models, provides clear information-theoretic guarantees to both parties with zero accuracy loss, and, in addition, is compatible with continuous data distributions and allows infinite precision.
KW - Private computation
KW - Private inference
KW - Private information retrieval
UR - http://www.scopus.com/inward/record.url?scp=85136287509&partnerID=8YFLogxK
U2 - 10.1109/ISIT50566.2022.9834464
DO - 10.1109/ISIT50566.2022.9834464
M3 - Conference contribution
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 1641
EP - 1646
BT - 2022 IEEE International Symposium on Information Theory, ISIT 2022
T2 - 2022 IEEE International Symposium on Information Theory, ISIT 2022
Y2 - 26 June 2022 through 1 July 2022
ER -