Federated Learning Privacy: Attacks, Defenses, Applications, and Policy Landscape - A Survey
arXiv (2024)
Abstract
Deep learning has shown incredible potential across a vast array of tasks, and
accompanying this growth has been an insatiable appetite for data. However,
much of the data needed to enable deep learning is stored on personal
devices, and growing privacy concerns have further heightened the challenges of
accessing such data. As a result, federated learning (FL) has emerged as an
important privacy-preserving technology enabling collaborative training of
machine learning models without the need to send the raw, potentially
sensitive, data to a central server. However, the fundamental premise that
sending model updates to a server is privacy-preserving only holds if the
updates cannot be "reverse engineered" to infer information about the private
training data. It has been shown under a wide variety of settings that this
premise for privacy does not hold.
In this survey paper, we provide a comprehensive literature review of the
different privacy attacks and defense methods in FL. We identify the current
limitations of these attacks and highlight the settings in which FL client
privacy can be broken. We dissect some of the successful industry applications
of FL and draw lessons for future successful adoption. We survey the emerging
landscape of privacy regulation for FL. We conclude with future directions for
taking FL toward the cherished goal of generating accurate models while
preserving the privacy of the data from its participants.
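The collaborative training described above can be illustrated with a minimal sketch of one federated averaging (FedAvg) round; the function names and the toy least-squares objective are our own illustrative assumptions, not taken from the survey. The key point is that clients send only model parameters to the server, never raw data.

```python
# Minimal FedAvg sketch (illustrative; names and objective are assumptions,
# not from the survey). Each client trains locally and returns only its
# model weight -- the raw (x, y) data never leaves the client.

def local_update(weights, data, lr=0.1):
    # Hypothetical local step: one pass of gradient descent on a
    # least-squares objective y ~ w * x over the client's data.
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets):
    # Server broadcasts global_w; each client trains locally; the server
    # averages the returned weights (the FedAvg aggregation step).
    client_weights = [local_update(global_w, d) for d in client_datasets]
    return sum(client_weights) / len(client_weights)

# Two toy clients whose data share an underlying slope of about 2.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 2.1), (3.0, 6.3)]]
w = 0.0
for _ in range(50):
    w = fedavg_round(w, clients)
# w converges to roughly 2, the slope shared across the clients' data
```

The survey's central caution applies exactly here: although only `w` is transmitted, the per-round weight updates can still leak information about each client's local `(x, y)` pairs, which is what the attacks it reviews exploit.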