Fast Computation of Sums of Gaussians in High Dimensions

dc.contributor.authorRaykar, Vikas Chandrakant
dc.contributor.authorYang, Changjiang
dc.contributor.authorDuraiswami, Ramani
dc.contributor.authorGumerov, Nail
dc.description.abstractEvaluating sums of multivariate Gaussian kernels is a key computational task in many problems in computational statistics and machine learning. The computational cost of the direct evaluation of such sums scales as the product of the number of kernel functions and the number of evaluation points. The fast Gauss transform proposed by Greengard and Strain (1991) is an $\epsilon$-exact approximation algorithm that reduces the computational complexity of evaluating the sum of $N$ Gaussians at $M$ points in $d$ dimensions from $\mathcal{O}(MN)$ to $\mathcal{O}(M+N)$. However, the constant factor in $\mathcal{O}(M+N)$ grows exponentially with increasing dimensionality $d$, which makes the algorithm impractical for dimensions greater than three. In this paper we present a new algorithm in which the constant factor is reduced to asymptotically polynomial order. The reduction is based on a new multivariate Taylor series expansion scheme (which can act both as a local and as a far-field expansion) combined with efficient space subdivision using the $k$-center algorithm. The proposed method differs from the original fast Gauss transform in its factorization, its efficient space subdivision, and its use of point-wise error bounds. Algorithm details, error bounds, a procedure to choose the parameters, and numerical experiments are presented. As an example, we show how the proposed method can be used for very fast $\epsilon$-exact multivariate kernel density estimation.en
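The sum the report accelerates is the discrete Gauss transform $G(y_j) = \sum_{i=1}^{N} q_i \exp(-\lVert y_j - x_i \rVert^2 / h^2)$. As context for the abstract, here is a minimal NumPy sketch of the direct $\mathcal{O}(MN)$ evaluation that serves as the baseline; the function name, argument layout, and bandwidth convention ($h^2$ in the denominator, following Greengard and Strain) are illustrative assumptions, not the report's actual interface.

```python
import numpy as np

def direct_gauss_transform(sources, targets, weights, h):
    """Brute-force evaluation of G(y_j) = sum_i q_i exp(-||y_j - x_i||^2 / h^2).

    sources : (N, d) array of Gaussian centers x_i
    targets : (M, d) array of evaluation points y_j
    weights : (N,) array of coefficients q_i
    h       : scalar bandwidth

    Cost is O(MN) in time and memory: this is the baseline the fast
    Gauss transform replaces with an epsilon-exact O(M + N) scheme.
    """
    # Pairwise squared distances, shape (M, N), via broadcasting.
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
    # Weighted sum of Gaussians at each target point, shape (M,).
    return np.exp(-d2 / h**2) @ weights
```

For kernel density estimation, as in the abstract's example, one would take uniform weights $q_i = 1/N$ and read off the (unnormalized) density estimate at each target point.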
dc.format.extent2413407 bytes
dc.relation.ispartofseriesUM Computer Science Departmenten
dc.titleFast Computation of Sums of Gaussians in High Dimensionsen
dc.typeTechnical Reporten
