On Safety in Safe Bayesian Optimization
CoRR (2024)
Abstract
Optimizing an unknown function under safety constraints is a central task in
robotics, biomedical engineering, and many other disciplines, and safe
Bayesian Optimization (BO) is increasingly used for this. Due to the
safety-critical nature of these applications, it is of utmost importance that theoretical
safety guarantees for these algorithms translate into the real world. In this
work, we investigate three safety-related issues of the popular class of
SafeOpt-type algorithms. First, these algorithms critically rely on frequentist
uncertainty bounds for Gaussian Process (GP) regression, but concrete
implementations typically utilize heuristics that invalidate all safety
guarantees. We provide a detailed analysis of this problem and introduce
Real-β-SafeOpt, a variant of the SafeOpt algorithm that leverages recent
GP bounds and thus retains all theoretical guarantees. Second, we identify
assuming an upper bound on the reproducing kernel Hilbert space (RKHS) norm of
the target function, a key technical assumption in SafeOpt-like algorithms, as
a central obstacle to real-world usage. To overcome this challenge, we
introduce the Lipschitz-only Safe Bayesian Optimization (LoSBO) algorithm,
which guarantees safety without an assumption on the RKHS bound, and
empirically show that this algorithm is not only safe, but also exhibits
superior performance compared to the state-of-the-art on several function
classes. Third, SafeOpt and derived algorithms rely on a discrete search space,
making them difficult to apply to higher-dimensional problems. To widen the
applicability of these algorithms, we introduce Lipschitz-only GP-UCB
(LoS-GP-UCB), a variant of LoSBO applicable to moderately high-dimensional
problems, while retaining safety.
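The safety mechanism the abstract attributes to LoSBO rests on a standard Lipschitz argument: if the safety function has Lipschitz constant L, then any candidate x is provably safe whenever some already-evaluated point x_i satisfies f(x_i) - L·‖x - x_i‖ ≥ h, where h is the safety threshold. The sketch below illustrates this certification step only; the function name, signature, and toy numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lipschitz_safe_set(candidates, X_obs, y_obs, L, h):
    """Boolean mask over candidate points that are provably safe.

    A candidate x is certified safe if, for some observed point x_i with
    measured value y_i, the Lipschitz lower bound y_i - L * ||x - x_i||
    stays at or above the safety threshold h. (Illustrative sketch; not
    the paper's LoSBO implementation.)
    """
    # Pairwise distances: shape (n_candidates, n_observations).
    dists = np.linalg.norm(candidates[:, None, :] - X_obs[None, :, :], axis=-1)
    # Worst-case function value at each candidate, per observation.
    lower_bounds = y_obs[None, :] - L * dists
    # Safe if at least one observation certifies the candidate.
    return (lower_bounds >= h).any(axis=1)

# Toy usage: 1-D grid, one safe observation at x = 0.5 with f(x) = 1.0.
grid = np.linspace(0.0, 1.0, 101)[:, None]
X_obs = np.array([[0.5]])
y_obs = np.array([1.0])
safe = lipschitz_safe_set(grid, X_obs, y_obs, L=2.0, h=0.39)
```

With L = 2 and h = 0.39, the certified region is the interval of points within distance (1.0 - 0.39)/2 of the observation; the safe set grows as more evaluations are collected, which is what lets such algorithms expand exploration without an RKHS-norm assumption.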