Sparsifying Distributed Algorithms with Ramifications in Massively Parallel Computation and Centralized Local Computation

SODA '19: Symposium on Discrete Algorithms, San Diego, California, January 2019 (2018)

Abstract
We introduce a method for sparsifying distributed algorithms and exhibit how it leads to improvements that go past known barriers in two algorithmic settings of large-scale graph processing: Massively Parallel Computation (MPC) and Local Computation Algorithms (LCA).

- MPC with Strongly Sublinear Memory: Recently, there has been growing interest in obtaining MPC algorithms that are faster than their classic O(log n)-round parallel counterparts for problems such as MIS, Maximal Matching, 2-Approximation of Minimum Vertex Cover, and (1+ϵ)-Approximation of Maximum Matching. Currently, all such MPC algorithms require Ω̃(n) memory per machine; Czumaj et al. [STOC'18] were the first to handle Õ(n) memory, running in O((log log n)^2) rounds. We obtain Õ(√(log Δ))-round MPC algorithms for all four of these problems that work even when each machine has memory n^α for any constant α ∈ (0, 1), where Δ denotes the maximum degree. These are the first sublogarithmic-time algorithms for these problems that break the linear-memory barrier.

- LCAs with Query Complexity Below the Parnas-Ron Paradigm: Currently, the best known LCA for MIS has query complexity Δ^O(log Δ) poly(log n), by Ghaffari [SODA'16]. As pointed out by Rubinfeld, obtaining a query complexity of poly(Δ log n) remains a central open question. Ghaffari's bound almost reaches a Δ^Ω(log Δ/log log Δ) barrier common to all known MIS LCAs, which simulate distributed algorithms by learning the local topology, à la Parnas-Ron [TCS'07]. This barrier follows from the Ω(log Δ/log log Δ) distributed lower bound of Kuhn et al. [JACM'16]. We break this barrier and obtain an MIS LCA with query complexity Δ^O(log log Δ) poly(log n).
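To make the Parnas-Ron simulation paradigm mentioned in the abstract concrete, here is a minimal Python sketch (illustrative only, not the paper's algorithm): the LCA answers "is v in the MIS?" by probing the radius-t ball around v and then locally simulating t rounds of a simplified Luby-style distributed MIS rule on that ball. Collecting the ball can take up to roughly Δ^t neighbor probes, which, at the t = Ω(log Δ/log log Δ) round lower bound of Kuhn et al., is exactly the Δ^Ω(log Δ/log log Δ) barrier the abstract refers to. The function names (collect_ball, lca_mis, prio) and the particular update rule are assumptions made for illustration.

# A minimal sketch of a Parnas-Ron-style LCA for MIS (illustrative, not the
# paper's algorithm): probe the radius-t ball around the queried vertex, then
# simulate t rounds of a simplified Luby-style distributed MIS rule on it.
import hashlib
from collections import deque

def prio(u, rnd, seed):
    # Deterministic pseudo-random priority of node u in round rnd. Deriving it
    # from a hash keeps different LCA queries mutually consistent.
    h = hashlib.sha256(f"{seed}:{rnd}:{u}".encode()).digest()
    return (int.from_bytes(h, "big"), u)  # the node id breaks (unlikely) ties

def collect_ball(v, neighbors, radius):
    # BFS out to distance `radius` using neighbor probes. This is the expensive
    # step: up to roughly Delta^radius probes, the source of the barrier above.
    dist, adj, probes = {v: 0}, {}, 0
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == radius:          # do not probe the boundary of the ball
            continue
        adj[u] = list(neighbors(u))    # one probe per reported neighbor
        probes += len(adj[u])
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist, adj, probes

def lca_mis(v, neighbors, rounds, seed=0):
    # Returns ('I', probes) if v joins the MIS, ('O', probes) if a neighbor does,
    # or ('U', probes) if v is still undecided after the simulation horizon.
    dist, adj, probes = collect_ball(v, neighbors, rounds)
    state = {u: "U" for u in dist}
    for r in range(1, rounds + 1):
        new_state = dict(state)
        for u in dist:
            # only simulate nodes whose dependency cone stays inside the ball
            if dist[u] > rounds - r or state[u] != "U":
                continue
            if any(state[w] == "I" for w in adj[u]):
                new_state[u] = "O"     # a neighbor already joined the MIS
            elif all(state[w] != "U" or prio(u, r, seed) < prio(w, r, seed)
                     for w in adj[u]):
                new_state[u] = "I"     # u beats every still-undecided neighbor
        state = new_state
    return state[v], probes

if __name__ == "__main__":
    # Toy usage on a 5-cycle; `neighbors` plays the role of the probe oracle.
    g = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
    print(lca_mis(2, lambda u: g[u], rounds=3))

Roughly speaking, simulating an O(log Δ)-round distributed MIS algorithm in this manner is what yields query bounds of the form Δ^O(log Δ) poly(log n), as in Ghaffari [SODA'16], and the Ω(log Δ/log log Δ) distributed lower bound is why no simulation of this kind can do better than Δ^Ω(log Δ/log log Δ). The paper's sparsification method is what allows the LCA to go below this paradigm, to Δ^O(log log Δ) poly(log n).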