Reaching Data Confidentiality and Model Accountability on the CalTrain

2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)

Abstract
Distributed collaborative learning (DCL) paradigms enable building joint machine learning models from mutually distrusting multi-party participants. Data confidentiality is guaranteed by retaining private training data on each participant's local infrastructure. However, this approach makes today's DCL designs fundamentally vulnerable to data poisoning and backdoor attacks. It also limits DCL's model accountability, which is key to tracing problematic training data instances back to their responsible contributors. In this paper, we introduce CALTRAIN, a centralized collaborative learning system that simultaneously achieves data confidentiality and model accountability. CALTRAIN enforces isolated computation via secure enclaves on centrally aggregated training data to guarantee data confidentiality. To support building accountable learning models, we securely maintain the links between training instances and their contributors. Our evaluation shows that the models generated by CALTRAIN achieve the same prediction accuracy as models trained in non-protected environments. We also demonstrate that when malicious training participants attempt to implant backdoors during model training, CALTRAIN can accurately and precisely discover the poisoned or mislabeled training data that lead to runtime mispredictions.
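For intuition only, the sketch below shows one way the links between training instances and their contributors could be maintained inside the aggregation point, so that an instance later flagged as poisoned or mislabeled can be attributed to a participant. The class name ProvenanceLedger, its methods, and the content-hash fingerprinting scheme are illustrative assumptions, not the paper's actual mechanism or API.

```python
import hashlib
from typing import Optional


class ProvenanceLedger:
    """Hypothetical illustration of per-instance accountability links:
    each training instance is fingerprinted and mapped to its contributor,
    so a flagged instance can later be traced back to a participant."""

    def __init__(self) -> None:
        self._owner_by_fingerprint: dict[str, str] = {}

    @staticmethod
    def _fingerprint(instance_bytes: bytes) -> str:
        # A content hash stands in for an instance identifier kept inside the enclave.
        return hashlib.sha256(instance_bytes).hexdigest()

    def record(self, instance_bytes: bytes, contributor_id: str) -> None:
        # Called as each decrypted training instance is aggregated.
        self._owner_by_fingerprint[self._fingerprint(instance_bytes)] = contributor_id

    def attribute(self, suspicious_instance_bytes: bytes) -> Optional[str]:
        # Returns the contributor responsible for a flagged instance, if known.
        return self._owner_by_fingerprint.get(self._fingerprint(suspicious_instance_bytes))


# Usage sketch: record instances from two (hypothetical) participants,
# then trace a suspicious instance back to its source.
ledger = ProvenanceLedger()
ledger.record(b"training_instance_0", contributor_id="participant-A")
ledger.record(b"training_instance_1", contributor_id="participant-B")
print(ledger.attribute(b"training_instance_1"))  # -> "participant-B"
```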
Keywords
Data Privacy, Learning Systems, Systems Security