LazyDP: Co-Designing Algorithm-Software for Scalable Training of Differentially Private Recommendation Models
Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2 (2024)
Abstract
Differential privacy (DP) is widely employed in industry as a practical
standard for privacy protection. While private training of computer vision and
natural language processing applications has been studied extensively, the
computational challenges of training recommender systems (RecSys) with DP have
not been explored. In this work, we first present a detailed characterization
of private RecSys training using DP-SGD, root-causing several of its
performance bottlenecks. Specifically, we identify DP-SGD's noise sampling and
noisy gradient update stages as suffering from severe compute and memory
bandwidth limitations, respectively, causing significant performance overhead
when training private RecSys. Based on these findings, we propose LazyDP, an
algorithm-software co-design that addresses the compute and memory challenges
of training RecSys with DP-SGD. Compared to a state-of-the-art DP-SGD training
system, we demonstrate that LazyDP provides an average 119x training throughput
improvement while also ensuring that mathematically equivalent, differentially
private RecSys models are trained.
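To make the two bottleneck stages concrete, below is a minimal NumPy sketch of a generic DP-SGD step (clip per-example gradients, sample Gaussian noise, apply the noisy update). This is an illustration of standard DP-SGD, not LazyDP's implementation; all names, shapes, and defaults are assumptions for the sketch.

```python
import numpy as np

def dp_sgd_update(params, per_example_grads, lr=0.1,
                  clip_norm=1.0, noise_mult=1.0, rng=None):
    """One generic DP-SGD step (illustrative, not LazyDP's API).

    per_example_grads: array of shape (batch_size, num_params).
    """
    rng = rng or np.random.default_rng(0)
    # Clip each example's gradient to bound its L2 sensitivity.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    summed = clipped.sum(axis=0)
    # Noise sampling stage: one Gaussian draw per model parameter. For
    # RecSys, embedding tables make num_params huge, which is the
    # compute bottleneck the abstract identifies.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=params.shape)
    # Noisy gradient update stage: unlike sparse non-private updates,
    # the noise is dense, so every parameter is read and written each
    # step -- the memory-bandwidth bottleneck the abstract identifies.
    return params - lr * (summed + noise) / per_example_grads.shape[0]
```

The dense noisy update is the key contrast with non-private RecSys training, where only the embedding rows touched by the batch would be updated.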