Kimad: Adaptive Gradient Compression with Bandwidth Awareness

Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik

DistributedML@CoNEXT (2023)

Abstract
In distributed training, communication often emerges as a bottleneck. In response, we introduce Kimad, a solution that offers adaptive gradient compression. By continuously monitoring bandwidth, Kimad adjusts compression ratios to match the requirements of individual neural network layers. Our extensive experiments and theoretical analysis confirm Kimad's strong performance, establishing it as a benchmark for adaptive compression in distributed deep learning.
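To illustrate the core idea, a per-layer compression ratio derived from measured bandwidth, here is a minimal sketch. It is not Kimad's actual algorithm: it assumes Top-K sparsification as the compressor and a hypothetical rule (`pick_ratio`, with an assumed `budget_ms` parameter) that sizes each layer's ratio so the compressed payload fits a fixed per-step communication budget, and it ignores the overhead of transmitting indices.

```python
import torch

def topk_compress(grad: torch.Tensor, ratio: float):
    """Keep the largest-magnitude `ratio` fraction of entries (Top-K sparsification)."""
    flat = grad.flatten()
    k = max(1, int(ratio * flat.numel()))
    _, indices = torch.topk(flat.abs(), k)
    # Values plus positions suffice to rebuild the sparse gradient on the receiver.
    return flat[indices], indices

def pick_ratio(bandwidth_mbps: float, layer_numel: int,
               budget_ms: float = 50.0, bytes_per_elem: int = 4) -> float:
    """Hypothetical bandwidth-aware rule: pick the largest Top-K ratio whose
    transfer time for this layer fits the per-step communication budget."""
    budget_bytes = bandwidth_mbps * 1e6 / 8 * (budget_ms / 1e3)
    max_elems = budget_bytes / bytes_per_elem
    return min(1.0, max_elems / layer_numel)

# Example: a 10M-parameter layer on a 100 Mbps link gets a small ratio,
# while the same layer on a faster link would be compressed less aggressively.
grad = torch.randn(10_000_000)
ratio = pick_ratio(bandwidth_mbps=100.0, layer_numel=grad.numel())
values, indices = topk_compress(grad, ratio)
print(f"ratio={ratio:.4f}, sent {values.numel()} of {grad.numel()} entries")
```

Because the rule is applied per layer, larger layers receive proportionally smaller ratios under the same bandwidth, which matches the abstract's claim of layer-specific compression.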