Competitive strategies to use "warm start" algorithms with predictions

Vaidehi Srinivas, Avrim Blum

arXiv (2024)

Abstract
We consider the problem of learning and using predictions for warm start algorithms with predictions. In this setting, an algorithm is given an instance of a problem and a prediction of the solution. The runtime of the algorithm is bounded by the distance from the predicted solution to the true solution of the instance. Previous work has shown that when instances are drawn i.i.d. from some distribution, it is possible to learn an approximately optimal fixed prediction (Dinitz et al., NeurIPS 2021), and in the adversarial online case, it is possible to compete with the best fixed prediction in hindsight (Khodak et al., NeurIPS 2022). In this work we give competitive guarantees against stronger benchmarks that consider a set of k predictions 𝐏. That is, the "optimal offline cost" to solve an instance with respect to 𝐏 is the distance from the true solution to the closest member of 𝐏. This is analogous to the k-medians objective function. In the distributional setting, we show a simple strategy that incurs cost that is at most an O(k) factor worse than the optimal offline cost. We then show a way to leverage learnable coarse information, in the form of partitions of the instance space into groups of "similar" instances, that allows us to potentially avoid this O(k) factor. Finally, we consider an online version of the problem, where we compete against offline strategies that are allowed to maintain a moving set of k predictions or "trajectories," and are charged for how much the predictions move. We give an algorithm that does at most O(k^4 ln^2 k) times as much work as any offline strategy of k trajectories. This algorithm is deterministic (robust to an adaptive adversary) and oblivious to the setting of k, so the guarantee holds for all k simultaneously.
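To make the benchmark concrete, here is a minimal Python sketch of the cost model described in the abstract: the warm-start cost of a single prediction is its distance to the true solution, and the offline benchmark with a set 𝐏 of k predictions charges only the distance to the closest member of 𝐏 (a k-medians-style objective). The l1 metric, the two-cluster distribution, and the specific prediction sets are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of the warm-start cost model (not the paper's algorithm).
# l1 distance is used for concreteness; the actual metric depends on which
# warm-start algorithm is being accelerated.

def warm_start_cost(prediction, true_solution):
    """Runtime proxy: distance from the predicted solution to the true one."""
    return float(np.abs(np.asarray(prediction) - np.asarray(true_solution)).sum())

def offline_cost(P, true_solution):
    """Benchmark w.r.t. a set P of k predictions: distance from the true
    solution to the closest member of P (k-medians-style objective)."""
    return min(warm_start_cost(p, true_solution) for p in P)

# Toy distributional example: true solutions come from two clusters, so two
# well-placed predictions achieve a much smaller benchmark cost than any
# single fixed prediction can.
rng = np.random.default_rng(0)
cluster = rng.integers(0, 2, size=200)
true_solutions = rng.normal(loc=10.0 * cluster, scale=1.0, size=200)

single_prediction = [np.median(true_solutions)]  # roughly the best single fixed prediction
two_predictions = [0.0, 10.0]                    # hypothetical set P with k = 2

avg_single = np.mean([offline_cost(single_prediction, s) for s in true_solutions])
avg_k = np.mean([offline_cost(two_predictions, s) for s in true_solutions])
print(f"avg cost, one fixed prediction:      {avg_single:.2f}")
print(f"avg benchmark cost, k=2 predictions: {avg_k:.2f}")
```

The gap between the two printed averages is the kind of advantage the stronger k-prediction benchmark captures; the paper's results bound how closely an online or distributional strategy can track that benchmark.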