TY - GEN
T1 - Information sharing and privacy in networks
AU - Gradwohl, Ronen
N1 - Publisher Copyright: © 2017 ACM.
PY - 2017/6/20
Y1 - 2017/6/20
N2 - Users of social, economic, or medical networks share personal information in exchange for tangible benefits, but may be harmed by leakage and misuse of the shared information. This paper analyzes the effects of privacy enhancements on the tradeoffs faced by such privacy-concerned individuals. The main insights are that different privacy enhancements may have opposite effects on the volume of information sharing, and that although they always seem beneficial to non-strategic users, privacy enhancements may backfire when users are strategic. The observation that privacy regulation may be harmful is not new, and the burgeoning empirical and experimental literature on the topic has shown that the effects of regulation may be positive or negative, depending on the context [1]. The theoretical literature on privacy has its roots in the work of [3] and [4], who derive a similar conclusion in a signaling context: under stronger privacy regimes individuals can more readily hide negative traits, which may be harmful to other market participants and to social welfare. This paper's goal is to identify properties of interactions that determine the effects of various privacy regulations. The point of departure is the observation that the conception of privacy commonly studied in the theory literature, namely as a technology for altering the signaling capabilities of individuals, misses a key dimension of privacy harm: a concern not about third parties' inferences about individuals' types, but rather about misuse of the information itself. Individuals concerned about identity theft, spam, harassment, stalking, re-identification, online tracking, excessive profiling, and targeted advertising care less about whether the information leaked about them is positive or negative, and more about the fact that information has been leaked and misused in the first place. We frame the paper within the context of online social networks, but describe its applicability in other contexts as well.
In the model, agents are not concerned about what the leaked information signals about their type, but rather about the quantity of personal information leaked. As a preliminary formalization, consider a user of a social network who wishes to share some information. The user derives a benefit from sharing information on the platform, a benefit captured by an increasing function of the amount of information shared. However, there is some chance of information leakage and misuse, in which case the user incurs a cost, captured by another increasing function of the amount of information shared. The user thus faces a tradeoff between benefit and cost. Is a privacy enhancement, in the form of lowering the probability of leakage, beneficial to the user? The answer is easily seen to be positive.
AB - Users of social, economic, or medical networks share personal information in exchange for tangible benefits, but may be harmed by leakage and misuse of the shared information. This paper analyzes the effects of privacy enhancements on the tradeoffs faced by such privacy-concerned individuals. The main insights are that different privacy enhancements may have opposite effects on the volume of information sharing, and that although they always seem beneficial to non-strategic users, privacy enhancements may backfire when users are strategic. The observation that privacy regulation may be harmful is not new, and the burgeoning empirical and experimental literature on the topic has shown that the effects of regulation may be positive or negative, depending on the context [1]. The theoretical literature on privacy has its roots in the work of [3] and [4], who derive a similar conclusion in a signaling context: under stronger privacy regimes individuals can more readily hide negative traits, which may be harmful to other market participants and to social welfare. This paper's goal is to identify properties of interactions that determine the effects of various privacy regulations. The point of departure is the observation that the conception of privacy commonly studied in the theory literature, namely as a technology for altering the signaling capabilities of individuals, misses a key dimension of privacy harm: a concern not about third parties' inferences about individuals' types, but rather about misuse of the information itself. Individuals concerned about identity theft, spam, harassment, stalking, re-identification, online tracking, excessive profiling, and targeted advertising care less about whether the information leaked about them is positive or negative, and more about the fact that information has been leaked and misused in the first place. We frame the paper within the context of online social networks, but describe its applicability in other contexts as well.
In the model, agents are not concerned about what the leaked information signals about their type, but rather about the quantity of personal information leaked. As a preliminary formalization, consider a user of a social network who wishes to share some information. The user derives a benefit from sharing information on the platform, a benefit captured by an increasing function of the amount of information shared. However, there is some chance of information leakage and misuse, in which case the user incurs a cost, captured by another increasing function of the amount of information shared. The user thus faces a tradeoff between benefit and cost. Is a privacy enhancement, in the form of lowering the probability of leakage, beneficial to the user? The answer is easily seen to be positive.
UR - http://www.scopus.com/inward/record.url?scp=85025833809&partnerID=8YFLogxK
U2 - https://doi.org/10.1145/3033274.3085095
DO - 10.1145/3033274.3085095
M3 - Conference contribution
T3 - EC 2017 - Proceedings of the 2017 ACM Conference on Economics and Computation
SP - 349
EP - 350
BT - EC 2017 - Proceedings of the 2017 ACM Conference on Economics and Computation
T2 - 18th ACM Conference on Economics and Computation, EC 2017
Y2 - 26 June 2017 through 30 June 2017
ER -