Efficient Language Model Architectures for Differentially Private Federated Learning
CoRR (2024)
Abstract
Cross-device federated learning (FL) is a technique that trains a model on
data distributed across typically millions of edge devices without data leaving
the devices. SGD is the standard client optimizer for on-device training in
cross-device FL, favored for its memory and computational efficiency. However,
in centralized training of neural language models, adaptive optimizers are
preferred because they offer improved stability and performance. In light of this,
we ask whether language models can be modified such that they can be efficiently
trained with SGD client optimizers, and answer this affirmatively.
We propose a scale-invariant Coupled Input Forget Gate (SI CIFG) recurrent
network by modifying the sigmoid and tanh activations in the recurrent cell, and
show in large-scale experiments that this new model converges faster and achieves
better utility than the standard CIFG recurrent model in cross-device FL. We
further show that the proposed scale-invariant modification also helps in
federated learning of larger transformer models. Finally, we demonstrate that the
scale-invariant modification is also compatible with other non-adaptive
algorithms. In particular, our results suggest an improved privacy-utility
trade-off in federated learning with differential privacy.
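Below is a minimal NumPy sketch of a standard Coupled Input Forget Gate (CIFG) recurrent cell, the baseline architecture the paper modifies. The abstract does not give the exact form of the proposed scale-invariant sigmoid and tanh, so the `gate_act` and `cell_act` hooks below are hypothetical placeholders using plain sigmoid/tanh; they only mark where the paper's modified activations would be substituted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CIFGCell:
    """Coupled Input Forget Gate LSTM cell: the input gate is tied to 1 - forget gate."""

    def __init__(self, input_dim, hidden_dim, seed=0,
                 gate_act=sigmoid, cell_act=np.tanh):
        rng = np.random.default_rng(seed)
        # One weight matrix per gate (forget, candidate, output) over [x_t; h_{t-1}].
        def w():
            return rng.normal(0.0, 0.1, (hidden_dim, input_dim + hidden_dim))
        self.W_f, self.W_c, self.W_o = w(), w(), w()
        self.b_f = np.zeros(hidden_dim)
        self.b_c = np.zeros(hidden_dim)
        self.b_o = np.zeros(hidden_dim)
        self.gate_act = gate_act  # placeholder: a scale-invariant sigmoid would go here
        self.cell_act = cell_act  # placeholder: a scale-invariant tanh would go here

    def step(self, x_t, h_prev, c_prev):
        z = np.concatenate([x_t, h_prev])
        f_t = self.gate_act(self.W_f @ z + self.b_f)       # forget gate
        i_t = 1.0 - f_t                                     # coupled input gate
        c_tilde = self.cell_act(self.W_c @ z + self.b_c)    # candidate cell state
        c_t = f_t * c_prev + i_t * c_tilde                  # new cell state
        o_t = self.gate_act(self.W_o @ z + self.b_o)        # output gate
        h_t = o_t * self.cell_act(c_t)                      # new hidden state
        return h_t, c_t

# Usage: run a single step on a random embedding vector.
cell = CIFGCell(input_dim=16, hidden_dim=32)
h, c = np.zeros(32), np.zeros(32)
x = np.random.default_rng(1).normal(size=16)
h, c = cell.step(x, h, c)
```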