Variational Inference via Rényi Upper-Lower Bound Optimization

ICMLA (2022)

Abstract
Variational inference provides a way to approximate probability densities. It does so by optimizing an upper or a lower bound on the likelihood of the observed data (the evidence). The classic variational inference approach maximizes the Evidence Lower BOund (ELBO). Recent proposals optimize the variational Rényi bound (VR) and the χ upper bound; however, these estimates are either biased or difficult to approximate due to high variance. In this paper we introduce a new upper bound (termed VRLU), which is based on the existing variational Rényi bound. In contrast to the existing VR bound, the Monte Carlo (MC) approximation of the VRLU bound is unbiased. Furthermore, we devise a (sandwiched) upper-lower bound variational inference method (termed VRS) to jointly optimize the upper and lower bounds. We present a set of experiments designed to evaluate the new VRLU bound and to compare the VRS method with the classic VAE and VR methods on a set of digit recognition tasks. The experiments and results demonstrate the advantage of the VRLU bound and the wide applicability of the VRS method.
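For context, the variational Rényi bound the abstract builds on is L_α = (1/(1−α)) log E_q[(p(x,z)/q(z|x))^(1−α)], whose K-sample Monte Carlo estimate is biased for finite K (the bias the proposed VRLU bound is designed to avoid). The sketch below shows only that standard VR estimator, not the paper's VRLU bound; the toy Gaussian target/proposal and the function name `vr_bound_mc` are illustrative assumptions, not from the paper.

```python
import numpy as np

def vr_bound_mc(log_w, alpha):
    """K-sample MC estimate of the variational Renyi bound.

    log_w: array of log importance weights log p(x, z_k) - log q(z_k | x).
    Estimates (1 / (1 - alpha)) * log( (1/K) * sum_k w_k^(1 - alpha) ).
    As alpha -> 1 this recovers the ELBO (the mean of log_w).
    """
    if np.isclose(alpha, 1.0):
        return float(np.mean(log_w))  # ELBO limit
    scaled = (1.0 - alpha) * log_w
    m = np.max(scaled)
    # log-mean-exp for numerical stability
    log_mean = m + np.log(np.mean(np.exp(scaled - m)))
    return float(log_mean / (1.0 - alpha))

# Toy example (illustrative assumption): target p(z) = N(0, 1) plays the
# role of the joint, proposal q(z) = N(0.5, 1); the true log evidence is 0.
rng = np.random.default_rng(0)
z = rng.normal(0.5, 1.0, size=10_000)
log_p = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
log_q = -0.5 * (z - 0.5) ** 2 - 0.5 * np.log(2 * np.pi)
log_w = log_p - log_q

elbo = vr_bound_mc(log_w, alpha=1.0)    # lower bound (ELBO)
upper = vr_bound_mc(log_w, alpha=-1.0)  # alpha < 0 yields an upper bound
print(elbo, upper)
```

Negative α gives an upper bound on the evidence and positive α a lower bound, which is what makes the sandwiched (VRS-style) optimization of both sides possible; the abstract's point is that the finite-sample estimate above is biased, motivating the unbiased VRLU alternative.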
Keywords
Variational Autoencoder, Rényi Divergence