Fast Benchmarking of Asynchronous Multi-Fidelity Optimization on Zero-Cost Benchmarks
arXiv (2024)
Abstract
While deep learning has celebrated many successes, its results often hinge on
the meticulous selection of hyperparameters (HPs). However, the time-consuming
nature of deep learning training makes HP optimization (HPO) a costly endeavor,
slowing down the development of efficient HPO tools. While zero-cost
benchmarks, which provide performance and runtime without actual training,
offer a solution for non-parallel setups, they fall short in parallel setups as
each worker must communicate its queried runtime to return its evaluation in
the exact order. This work addresses this challenge by introducing a
user-friendly Python package that facilitates efficient parallel HPO with
zero-cost benchmarks. Our approach calculates the exact return order based on
the information stored in the file system, eliminating the need for long waiting
times and enabling much faster HPO evaluations. We first verify the correctness
of our approach through extensive testing, and experiments with 6 popular
HPO libraries show its applicability to diverse libraries and its ability to
achieve over a 1000x speedup compared to a traditional approach. Our package can
be installed via pip install mfhpo-simulator.
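The core idea of determining the order in which parallel workers' results would return, using only the runtimes queried from a zero-cost benchmark, can be illustrated with a minimal sketch. The function name and data layout below are hypothetical and are not the package's actual API; the file-system bookkeeping described in the abstract is also omitted here for brevity.

```python
import heapq


def simulated_return_order(worker_runtimes):
    """Hypothetical sketch: given per-worker lists of queried (simulated)
    runtimes, yield (worker_id, config_index) pairs in the order the results
    would finish if training were actually run in parallel."""
    heap = []  # entries of (simulated finish time, worker_id, config_index)
    clocks = {w: 0.0 for w in worker_runtimes}  # each worker's simulated clock

    # Seed the heap with each worker's first evaluation.
    for w, runtimes in worker_runtimes.items():
        if runtimes:
            clocks[w] += runtimes[0]
            heapq.heappush(heap, (clocks[w], w, 0))

    # Repeatedly release the evaluation with the earliest simulated finish time,
    # then schedule that worker's next evaluation.
    while heap:
        _, w, idx = heapq.heappop(heap)
        yield w, idx
        nxt = idx + 1
        if nxt < len(worker_runtimes[w]):
            clocks[w] += worker_runtimes[w][nxt]
            heapq.heappush(heap, (clocks[w], w, nxt))


# Example: two workers whose queried runtimes come from a zero-cost benchmark.
order = list(simulated_return_order({0: [30.0, 5.0], 1: [10.0, 40.0]}))
print(order)  # [(1, 0), (0, 0), (0, 1), (1, 1)]
```

In this sketch no worker ever sleeps for the queried runtime; the ordering is derived purely from bookkeeping, which is what allows evaluations to be returned immediately while preserving the order a real parallel run would produce.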