Optimal Differentially Private Learning of Thresholds and Quasi-Concave Optimization

Proceedings of the 55th Annual ACM Symposium on Theory of Computing (STOC 2023)

Abstract
The problem of learning threshold functions is a fundamental one in machine learning. Classical learning theory implies a sample complexity of O(ξ^{-1} log(1/β)) (for generalization error ξ with confidence 1−β). The private version of the problem, however, is more challenging; in particular, the sample complexity must depend on the size |X| of the domain. Progress on quantifying this dependence, via lower and upper bounds, was made in a line of works over the past decade. In this paper, we finally close the gap for approximate-DP and provide a nearly tight upper bound of Õ(log* |X|), which matches a lower bound by Alon et al. (that applies even to improper learning) and improves over a prior upper bound of Õ((log* |X|)^{1.5}) by Kaplan et al. We also provide matching upper and lower bounds of Θ̃(2^{log* |X|}) for the additive error of private quasi-concave optimization (a related and more general problem). Our improvement is achieved via the novel Reorder-Slice-Compute paradigm for private data analysis, which we believe will have further applications.
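As a reading aid, the bounds quoted above can be restated in standard notation. The block below is only a LaTeX transcription of the abstract's claims (n denotes sample size, log* the iterated logarithm, |X| the domain size); it adds no results beyond those stated in the abstract.

    \begin{align*}
      &\text{Non-private PAC learning of thresholds:}
         && n = O\bigl(\xi^{-1}\log(1/\beta)\bigr) \\
      &\text{Approximate-DP learning (this paper; matches Alon et al.'s lower bound):}
         && n = \tilde{O}\bigl(\log^* |X|\bigr) \\
      &\text{Prior approximate-DP upper bound (Kaplan et al.):}
         && n = \tilde{O}\bigl((\log^* |X|)^{1.5}\bigr) \\
      &\text{Additive error of private quasi-concave optimization (tight):}
         && \tilde{\Theta}\bigl(2^{\log^* |X|}\bigr)
    \end{align*}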
Keywords
differential privacy,PAC learning,threshold functions