TY - JOUR

T1 - Algorithmic stability for adaptive data analysis

AU - Bassily, Raef

AU - Nissim, Kobbi

AU - Smith, Adam

AU - Steinke, Thomas

AU - Stemmer, Uri

AU - Ullman, Jonathan

N1 - Publisher Copyright: © 2021 Society for Industrial and Applied Mathematics Publications. All rights reserved.

PY - 2021/1/1

Y1 - 2021/1/1

N2 - Adaptivity is an important feature of data analysis: the choice of questions to ask about a dataset often depends on previous interactions with the same dataset. However, statistical validity is typically studied in a nonadaptive model, where all questions are specified before the dataset is drawn. Recent work by Dwork et al. [Proceedings of STOC, ACM, 2015, pp. 117-126] and Hardt and Ullman [Proceedings of FOCS, IEEE, 2014, pp. 454-463] initiated the formal study of this problem and gave the first upper and lower bounds on the achievable generalization error for adaptive data analysis. Specifically, suppose there is an unknown distribution P and a set x of n independent samples is drawn from P. We seek an algorithm that, given x as input, accurately answers a sequence of adaptively chosen "queries" about the unknown distribution P. How many samples n must we draw from the distribution, as a function of the type of queries, the number of queries, and the desired level of accuracy? In this work we make two new contributions toward resolving this question: 1. We give upper bounds on the number of samples n that are needed to answer statistical queries. The bounds improve and simplify the work of Dwork et al. and have been applied in subsequent work by those authors [Science, 349 (2015), pp. 636-638; Proceedings of NIPS, 2015, pp. 2350-2358]. 2. We prove the first upper bounds on the number of samples required to answer more general families of queries. These include arbitrary low-sensitivity queries and an important class of optimization queries (alternatively, risk minimization queries). As in Dwork et al., our algorithms are based on a connection with algorithmic stability in the form of differential privacy. We extend their work by giving a quantitatively optimal, more general, and simpler proof of their main theorem that stable algorithms of the kind guaranteed by differential privacy imply low generalization error. We also show that weaker stability guarantees, such as bounded Kullback-Leibler divergence and total variation distance, lead to correspondingly weaker generalization guarantees.

AB - Adaptivity is an important feature of data analysis: the choice of questions to ask about a dataset often depends on previous interactions with the same dataset. However, statistical validity is typically studied in a nonadaptive model, where all questions are specified before the dataset is drawn. Recent work by Dwork et al. [Proceedings of STOC, ACM, 2015, pp. 117-126] and Hardt and Ullman [Proceedings of FOCS, IEEE, 2014, pp. 454-463] initiated the formal study of this problem and gave the first upper and lower bounds on the achievable generalization error for adaptive data analysis. Specifically, suppose there is an unknown distribution P and a set x of n independent samples is drawn from P. We seek an algorithm that, given x as input, accurately answers a sequence of adaptively chosen "queries" about the unknown distribution P. How many samples n must we draw from the distribution, as a function of the type of queries, the number of queries, and the desired level of accuracy? In this work we make two new contributions toward resolving this question: 1. We give upper bounds on the number of samples n that are needed to answer statistical queries. The bounds improve and simplify the work of Dwork et al. and have been applied in subsequent work by those authors [Science, 349 (2015), pp. 636-638; Proceedings of NIPS, 2015, pp. 2350-2358]. 2. We prove the first upper bounds on the number of samples required to answer more general families of queries. These include arbitrary low-sensitivity queries and an important class of optimization queries (alternatively, risk minimization queries). As in Dwork et al., our algorithms are based on a connection with algorithmic stability in the form of differential privacy. We extend their work by giving a quantitatively optimal, more general, and simpler proof of their main theorem that stable algorithms of the kind guaranteed by differential privacy imply low generalization error. We also show that weaker stability guarantees, such as bounded Kullback-Leibler divergence and total variation distance, lead to correspondingly weaker generalization guarantees.

KW - Adaptive data analysis

KW - Algorithmic stability

KW - Differential privacy

KW - Statistical queries

UR - http://www.scopus.com/inward/record.url?scp=85108387439&partnerID=8YFLogxK

U2 - https://doi.org/10.1137/16M1103646

DO - 10.1137/16M1103646

M3 - Article

SN - 0097-5397

VL - 50

JO - SIAM Journal on Computing

JF - SIAM Journal on Computing

IS - 3

ER -