Debiased Distribution Compression
CoRR (2024)
Abstract
Modern compression methods can summarize a target distribution ℙ
more succinctly than i.i.d. sampling but require access to a low-bias input
sequence like a Markov chain converging quickly to ℙ. We introduce a
new suite of compression methods suitable for compression with biased input
sequences. Given n points targeting the wrong distribution and quadratic
time, Stein Kernel Thinning (SKT) returns √n equal-weighted points with
O(n^{-1/2}) maximum mean discrepancy (MMD) to ℙ. For
larger-scale compression tasks, Low-rank SKT achieves the same feat in
sub-quadratic time using an adaptive low-rank debiasing procedure that may be
of independent interest. For downstream tasks that support simplex or
constant-preserving weights, Stein Recombination and Stein Cholesky achieve
even greater parsimony, matching the guarantees of SKT with as few as
poly-log(n) weighted points. Underlying these advances are new
guarantees for the quality of simplex-weighted coresets, the spectral decay of
kernel matrices, and the covering numbers of Stein kernel Hilbert spaces. In
our experiments, our techniques provide succinct and accurate posterior
summaries while overcoming biases due to burn-in, approximate Markov chain
Monte Carlo, and tempering.
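
The guarantees above are stated in terms of MMD to ℙ under a Stein kernel, a quantity that can be evaluated from the candidate points and the score function ∇ log p alone, without samples from ℙ. The sketch below illustrates only that quantity, not the paper's SKT algorithm: the IMQ base kernel, the helper names `stein_kernel_matrix` and `stein_mmd`, and the Gaussian toy target are illustrative assumptions.

```python
# Minimal sketch (assumption: Langevin Stein kernel built from an IMQ base
# kernel; not the paper's implementation) of the MMD to P of a weighted
# point set. Because E_P[k_p(x, .)] = 0, the MMD reduces to a quadratic
# form in the Stein kernel matrix over the candidate points.
import numpy as np

def stein_kernel_matrix(X, score, c=1.0, beta=-0.5):
    """Langevin Stein kernel matrix for the IMQ base kernel (c^2 + ||x-y||^2)^beta."""
    n, d = X.shape
    S = score(X)                              # (n, d) score vectors grad log p
    diffs = X[:, None, :] - X[None, :, :]     # (n, n, d) pairwise x_i - x_j
    sq = np.sum(diffs ** 2, axis=-1)          # ||x_i - x_j||^2
    u = c ** 2 + sq
    # div_x div_y k  +  grad_x k . s(y)  +  grad_y k . s(x)  +  k * s(x) . s(y)
    term1 = -4 * beta * (beta - 1) * u ** (beta - 2) * sq - 2 * beta * d * u ** (beta - 1)
    cross = np.einsum('ijk,jk->ij', diffs, S) - np.einsum('ijk,ik->ij', diffs, S)
    term2 = 2 * beta * u ** (beta - 1) * cross
    term3 = u ** beta * (S @ S.T)
    return term1 + term2 + term3

def stein_mmd(X, weights, score):
    """MMD to the target P of the weighted point set (X, weights)."""
    Kp = stein_kernel_matrix(X, score)
    return np.sqrt(weights @ Kp @ weights)

# Toy usage: equal weights on points drawn from a shifted ("biased") source,
# measured against a standard Gaussian target whose score is simply -x.
rng = np.random.default_rng(0)
X_biased = rng.normal(loc=0.5, size=(200, 2))
w = np.full(len(X_biased), 1.0 / len(X_biased))
print(stein_mmd(X_biased, w, score=lambda X: -X))
```

A debiased compression method then seeks a much smaller set of points and weights that drives this quadratic form down, which is the sense in which the √n or poly-log(n) outputs above retain O(n^{-1/2}) MMD to ℙ.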