Closed-Form Bounds for DP-SGD against Record-level Inference
CoRR (2024)
Abstract
Machine learning models trained with differentially-private (DP) algorithms
such as DP-SGD enjoy resilience against a wide range of privacy attacks.
Although it is possible to derive bounds for some attacks based solely on an
(ε,δ)-DP guarantee, meaningful bounds require a small enough
privacy budget (i.e., injecting a large amount of noise), which results in a
large loss in utility. This paper presents a new approach to evaluate the
privacy of machine learning models against specific record-level threats, such
as membership and attribute inference, without the indirection through DP. We
focus on the popular DP-SGD algorithm, and derive simple closed-form bounds.
Our proofs model DP-SGD as an information theoretic channel whose inputs are
the secrets that an attacker wants to infer (e.g., membership of a data record)
and whose outputs are the intermediate model parameters produced by iterative
optimization. We obtain bounds for membership inference that match
state-of-the-art techniques, whilst being orders of magnitude faster to
compute. Additionally, we present a novel data-dependent bound against
attribute inference. Our results provide a direct, interpretable, and practical
way to evaluate the privacy of trained models against specific inference
threats without sacrificing utility.