A Quasi-Monte Carlo Data Structure for Smooth Kernel Evaluations
CoRR (2024)
Abstract
In the kernel density estimation (KDE) problem one is given a kernel K(x, y) and a dataset P of points in a Euclidean space, and must prepare a data structure that can quickly answer density queries: given a point q, output a (1+ϵ)-approximation to μ := (1/|P|) ∑_{p∈P} K(p, q). The classical approach to KDE is the celebrated fast multipole method of [Greengard and Rokhlin]. The fast multipole method combines a basic space-partitioning approach with a multidimensional Taylor expansion, which yields a query time of ≈ log^d(n/ϵ) (exponential in the dimension d). A recent line of work initiated by [Charikar and Siminelakis] achieved polynomial dependence on d via a combination of random sampling and randomized space partitioning, with [Backurs et al.] giving an efficient data structure with query time ≈ polylog(1/μ)/ϵ^2 for smooth kernels.
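To make the query being approximated concrete, here is a minimal Python sketch of an exact density query next to the plain random-sampling estimator that the ϵ^-2 dependence refers to. This is only an illustration, not the data structure from the paper; the Gaussian kernel, the sample-size constant, and the assumed lower bound mu_lower on μ are hypothetical choices made for the example.

```python
import math
import random

def kde_exact(P, q, K):
    # Exact density query: mu = (1/|P|) * sum_{p in P} K(p, q).
    return sum(K(p, q) for p in P) / len(P)

def kde_sample(P, q, K, eps, mu_lower):
    # Plain Monte Carlo estimator: average K(p, q) over uniformly sampled p.
    # Roughly 1/(eps^2 * mu) samples suffice for a (1 +/- eps) estimate with
    # good probability; this is the quadratic dependence on eps that the
    # sampling-based data structures inherit. The constant 3.0 is illustrative.
    m = min(len(P), math.ceil(3.0 / (eps ** 2 * mu_lower)))
    sample = random.choices(P, k=m)
    return sum(K(p, q) for p in sample) / m

# Example with a 2D Gaussian kernel (an assumption; any smooth kernel works).
def gaussian(p, q):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(p, q)))

P = [(random.random(), random.random()) for _ in range(50_000)]
q = (0.5, 0.5)
print(kde_exact(P, q, gaussian))
print(kde_sample(P, q, gaussian, eps=0.1, mu_lower=0.01))
```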
Quadratic dependence on ϵ, inherent to the sampling methods, is prohibitively expensive for small ϵ. This issue is addressed by quasi-Monte Carlo methods in numerical analysis. The high-level idea in quasi-Monte Carlo methods is to replace random sampling with a discrepancy-based approach – an idea recently applied to coresets for KDE by [Phillips and Tai]. The work of Phillips and Tai gives a space-efficient data structure with query complexity ≈ 1/(ϵμ). This is polynomially better in 1/ϵ, but exponentially worse in 1/μ. We achieve the best of both: a data structure with ≈ polylog(1/μ)/ϵ query time for smooth kernel KDE. Our main insight is a new way to combine discrepancy theory with randomized space partitioning inspired by, but significantly more efficient than, that of the fast multipole methods. We hope that our techniques will find further applications to linear algebra for kernel matrices.
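As an illustration of the general quasi-Monte Carlo principle only (not of the discrepancy-based construction in this paper), the sketch below compares plain Monte Carlo with a base-2 van der Corput low-discrepancy sequence for estimating the mean of a smooth Gaussian kernel over [0,1]. The one-dimensional setting, the specific kernel, and the closed-form reference value are assumptions made for brevity; the point is that the low-discrepancy error shrinks roughly like 1/n rather than 1/√n, i.e. a 1/ϵ rather than 1/ϵ^2 sample count.

```python
import math
import random

def van_der_corput(n: int, base: int = 2) -> float:
    # Radical inverse of n in the given base: the classic 1D low-discrepancy sequence.
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def kernel(x: float, q: float) -> float:
    # A smooth (Gaussian) kernel; stands in for K(p, q).
    return math.exp(-(x - q) ** 2)

def true_mean(q: float) -> float:
    # Closed form for E_{x ~ U[0,1]}[K(x, q)], used only to measure error.
    return 0.5 * math.sqrt(math.pi) * (math.erf(1 - q) + math.erf(q))

q = 0.3
for n in (10**2, 10**4, 10**6):
    mc = sum(kernel(random.random(), q) for _ in range(n)) / n
    qmc = sum(kernel(van_der_corput(i), q) for i in range(1, n + 1)) / n
    print(f"n={n:>8}  MC err={abs(mc - true_mean(q)):.2e}  "
          f"QMC err={abs(qmc - true_mean(q)):.2e}")
```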