On Distributed Larger-Than-Memory Subset Selection With Pairwise Submodular Functions
CoRR (2024)
Abstract
Many learning problems hinge on the fundamental problem of subset selection,
i.e., identifying a subset of important and representative points. For example,
selecting the most significant samples in ML training can not only reduce
training costs but also enhance model quality. Submodularity, a discrete
analogue of convexity, is commonly used for solving subset selection problems.
However, existing algorithms for optimizing submodular functions are
sequential, and the prior distributed methods require at least one central
machine to fit the target subset. In this paper, we relax the requirement of
having a central machine for the target subset by proposing a novel distributed
bounding algorithm with provable approximation guarantees. The algorithm
iteratively bounds the minimum and maximum utility values to select
high-quality points and discard the unimportant ones. When bounding does not find
the complete subset, we use a multi-round, partition-based distributed greedy
algorithm to identify the remaining subset. We show that these algorithms find
high-quality subsets on CIFAR-100 and ImageNet with marginal or no loss in
quality compared to centralized methods, and scale to a dataset with 13 billion
points.
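The multi-round, partition-based distributed greedy step can be sketched as follows. This is a minimal illustrative sketch (in the spirit of GreeDi-style distributed submodular maximization), not the paper's implementation: the function names, the toy coverage utility, and the round-robin partitioning are all assumptions for illustration.

```python
# Hypothetical sketch of partition-based distributed greedy selection
# for a monotone submodular utility; names and data are illustrative.

def greedy(points, k, utility):
    """Standard greedy: repeatedly add the point with the largest marginal gain."""
    selected = []
    for _ in range(min(k, len(points))):
        best = max((p for p in points if p not in selected),
                   key=lambda p: utility(selected + [p]) - utility(selected))
        selected.append(best)
    return selected

def distributed_greedy(points, k, utility, num_machines):
    """Partition the data, run greedy per partition (in parallel in practice),
    then run greedy once more over the union of the local solutions."""
    partitions = [points[i::num_machines] for i in range(num_machines)]
    candidates = [p for part in partitions for p in greedy(part, k, utility)]
    return greedy(candidates, k, utility)

# Toy set-cover utility: each "point" covers a set of elements; the number
# of elements covered by the union is monotone submodular.
coverage = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 2, 3}, 4: {5, 6}}
utility = lambda S: len(set().union(*(coverage[p] for p in S)) if S else set())

subset = distributed_greedy(list(coverage), k=2, utility=utility, num_machines=2)
print(subset)  # a size-2 subset with large coverage
```

In a real deployment each partition's greedy run executes on a separate machine, and only the small candidate sets are communicated; the paper's bounding algorithm additionally prunes points before this step so no single machine has to hold the full target subset.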