Shield Against Gradient Leakage Attacks: Adaptive Privacy-Preserving Federated Learning

IEEE/ACM Transactions on Networking (2024)

Abstract
Federated learning (FL) requires frequent uploading and updating of model parameters, which makes it naturally vulnerable to gradient leakage attacks (GLAs) that reconstruct private training data from gradients. Although some works incorporate differential privacy (DP) into FL to mitigate such privacy issues, their performance is unsatisfactory because they overlook the fact that GLAs incur heterogeneous risks of privacy leakage (RoPL) across gradients from different communication rounds and clients. In this paper, we propose an Adaptive Privacy-Preserving Federated Learning (Adp-PPFL) framework to achieve satisfactory privacy protection against GLAs while ensuring good performance in terms of model accuracy and convergence speed. Specifically, a leakage risk-aware privacy decomposition mechanism is proposed to provide adaptive privacy protection for different communication rounds and clients by dynamically allocating the privacy budget according to the quantified RoPL. In particular, we design a round-level and a client-level RoPL quantification method to measure the risk of a GLA breaking privacy from gradients in different communication rounds and clients, respectively, relying only on the limited information available in general FL settings. Furthermore, to improve FL training performance (i.e., convergence speed and global model accuracy), we propose an adaptive privacy-preserving local training mechanism that dynamically clips the gradients and decays the noise added to the clipped gradients during local training. Extensive experiments show that our framework outperforms existing differentially private FL schemes in model accuracy, convergence, and attack resistance.
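To make the adaptive mechanisms described above concrete, the following is a minimal sketch, not the authors' implementation, of how risk-aware budget allocation and dynamically clipped, noise-decayed local updates might look in a DP-SGD-style setting. All function names, the inverse-risk weighting, and the exponential decay schedule are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np


def allocate_round_budgets(risk_scores, epsilon_total):
    """Split a total privacy budget across communication rounds.

    Assumption (not from the paper): rounds with a higher quantified
    leakage risk receive a smaller per-round epsilon (stronger noise),
    via inverse-risk weighting.
    """
    risks = np.asarray(risk_scores, dtype=float)
    weights = 1.0 / risks                      # higher risk -> smaller share
    return epsilon_total * weights / weights.sum()


def decayed_sigma(sigma0, local_step, decay_rate=0.99):
    """Hypothetical exponential decay of the Gaussian noise scale over
    local training steps, so later steps are perturbed less."""
    return sigma0 * decay_rate ** local_step


def dp_local_update(per_example_grads, clip_norm, sigma):
    """DP-SGD-style update: clip each per-example gradient to clip_norm,
    average, then add Gaussian noise calibrated to the clipping bound."""
    clipped = []
    for g in per_example_grads:
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, sigma * clip_norm / len(clipped),
                             size=mean_grad.shape)
    return mean_grad + noise


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grads = [rng.normal(size=10) for _ in range(32)]   # toy per-example gradients
    eps_per_round = allocate_round_budgets([0.9, 0.5, 0.2], epsilon_total=8.0)
    sigma = decayed_sigma(sigma0=1.2, local_step=5)
    update = dp_local_update(grads, clip_norm=1.0, sigma=sigma)
    print(eps_per_round, np.linalg.norm(update))
```

In this sketch the clipping threshold and noise scale are plain parameters; the paper's adaptive mechanism would instead adjust them dynamically during local training based on the quantified round-level and client-level RoPL.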
Keywords
Federated learning, data privacy, gradient leakage attack, differential privacy