Privacy-Preserving, Dropout-Resilient Aggregation in Decentralized Learning
arXiv (2024)
Abstract
Decentralized learning (DL) offers a novel paradigm in machine learning by
distributing training across clients without central aggregation, enhancing
scalability and efficiency. However, DL's peer-to-peer model raises challenges
in protecting against inference attacks and privacy leaks. Because it forgoes
a central aggregator, DL demands privacy-preserving aggregation methods that
protect data from honest-but-curious clients and adversaries and maintain
network-wide privacy. Privacy-preserving DL faces the additional hurdle of
client dropout: clients failing to submit updates due to connectivity problems
or unavailability, which further complicates aggregation.
This work proposes three secret sharing-based dropout resilience approaches
for privacy-preserving DL. Our study evaluates the efficiency, performance, and
accuracy of these protocols through experiments on datasets such as MNIST,
Fashion-MNIST, SVHN, and CIFAR-10. We compare our protocols with traditional
secret-sharing solutions across scenarios, including those with up to 1000
clients. Evaluations show that our protocols significantly outperform
conventional methods, especially in scenarios with dropout rates of up to 30%
and model sizes of up to 10^6 parameters. Our approaches demonstrate markedly
higher efficiency with larger models, higher dropout rates, and extensive client
networks, highlighting their effectiveness in enhancing decentralized learning
systems' privacy and dropout robustness.
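
The paper's three protocols are not reproduced on this page. As a rough
illustration of the primitive they build on, the sketch below shows how Shamir
secret sharing yields dropout-tolerant aggregation: each client shares its
(quantized) update among n holders with threshold t, holders sum the shares
they receive, and any t surviving holders can reconstruct the sum of updates
while no individual update is ever revealed. All names, the field modulus P,
and the (n, t) parameters here are illustrative assumptions, not details from
the paper.

```python
# Minimal sketch of dropout-resilient aggregation via Shamir secret sharing.
# Not the paper's protocols; an illustration of the underlying primitive.
import random

P = 2**61 - 1  # prime field modulus (illustrative choice)

def share(secret, n, t):
    """Split `secret` into n Shamir shares with threshold t (degree t-1 polynomial)."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)}

def reconstruct(points):
    """Lagrange-interpolate the shared polynomial at 0 from >= t shares."""
    total = 0
    for xi, yi in points.items():
        num, den = 1, 1
        for xj in points:
            if xj != xi:
                num = num * -xj % P           # numerator of L_i(0)
                den = den * (xi - xj) % P     # denominator of L_i(0)
        total = (total + yi * num * pow(den, P - 2, P)) % P  # modular inverse via Fermat
    return total

# Usage: 5 clients, threshold 3, so up to 2 holder dropouts are tolerated.
n, t = 5, 3
updates = [7, 11, 3, 20, 6]                  # quantized scalar model updates
all_shares = [share(u, n, t) for u in updates]

# Shamir shares are additively homomorphic: each holder sums the shares it
# received locally. Holders 4 and 5 then drop out before responding.
alive = [1, 2, 3]
summed = {x: sum(s[x] for s in all_shares) % P for x in alive}
assert reconstruct(summed) == sum(updates) % P  # the aggregate survives dropouts
```

Because the shares are additive, only the aggregate is ever interpolated:
individual updates stay hidden from honest-but-curious holders, and any n - t
dropouts among the holders leave the sum recoverable.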