### Browsing by Author "Raykar, Vikas Chandrakant"

Now showing 1 - 4 of 4

**Fast Computation of Sums of Gaussians in High Dimensions** (2005-11-15)
Raykar, Vikas Chandrakant; Yang, Changjiang; Duraiswami, Ramani; Gumerov, Nail

Evaluating sums of multivariate Gaussian kernels is a key computational task in many problems in computational statistics and machine learning. The cost of direct evaluation of such sums scales as the product of the number of kernel functions and the number of evaluation points. The fast Gauss transform proposed by Greengard and Strain (1991) is an $\epsilon$-exact approximation algorithm that reduces the computational complexity of evaluating the sum of $N$ Gaussians at $M$ points in $d$ dimensions from $\mathcal{O}(MN)$ to $\mathcal{O}(M+N)$. However, the constant factor in $\mathcal{O}(M+N)$ grows exponentially with the dimensionality $d$, which makes the algorithm impractical for dimensions greater than three. In this paper we present a new algorithm in which the constant factor is reduced to asymptotically polynomial order. The reduction is based on a new multivariate Taylor series expansion scheme (which can act both as a local and as a far-field expansion) combined with efficient space subdivision using the $k$-center algorithm. The proposed method differs from the original fast Gauss transform in its factorization, its space subdivision, and its use of point-wise error bounds. Algorithm details, error bounds, the procedure for choosing the parameters, and numerical experiments are presented.
As an example we show how the proposed method can be used for very fast $\epsilon$-exact multivariate kernel density estimation.

**Position Calibration of Acoustic Sensors and Actuators on Distributed General Purpose Computing Platforms** (2003-12-04)
Raykar, Vikas Chandrakant; Chellappa, Rama; Duraiswami, Ramani; Shamma, Shihab; Wu, Min; Electrical Engineering

An algorithm is presented to automatically determine the relative 3D positions of audio sensors and actuators in an ad-hoc distributed network of heterogeneous general purpose computing platforms. A closed-form approximate solution is derived, which is further refined by minimizing a non-linear error function. Our formulation and solution account for the lack of temporal synchronization among different platforms. We also derive an approximate expression for the mean and covariance of the implicitly defined estimator. The theoretical performance limits for the sensor positions are derived and analyzed with respect to the number of sensors and actuators as well as their geometry. We report extensive simulation results and discuss the practical details of implementing our algorithms.

**Scalable Machine Learning for Massive Datasets: Fast Summation Algorithms** (2007-04-25)
Raykar, Vikas Chandrakant; Duraiswami, Ramani; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Huge data sets containing millions of training examples with a large number of attributes are relatively easy to gather. However, one of the bottlenecks to successful inference is the computational complexity of machine learning algorithms. Most state-of-the-art nonparametric machine learning algorithms have a computational complexity of either O(N^2) or O(N^3), where N is the number of training examples. This has seriously restricted the use of massive data sets.
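The computational primitive at issue here, and in the fast Gauss transform work above, is a weighted sum of Gaussian kernels. The following is an illustrative NumPy sketch of the direct O(MN) evaluation that these fast algorithms are designed to replace; it is not code from the thesis, and the array names and bandwidth `h` are assumptions:

```python
import numpy as np

def direct_gauss_sum(X, Y, q, h):
    """Directly evaluate G(y_j) = sum_i q_i * exp(-||y_j - x_i||^2 / h^2).

    X : (N, d) source points, Y : (M, d) evaluation points,
    q : (N,) weights, h : bandwidth. Cost is O(M * N * d) in time,
    O(M * N) in memory for the explicit distance matrix.
    """
    # Pairwise squared distances between every evaluation and source point.
    d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # (M, N)
    return np.exp(-d2 / h**2) @ q                             # (M,)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))   # N = 500 sources in d = 3
Y = rng.standard_normal((200, 3))   # M = 200 targets
G = direct_gauss_sum(X, Y, np.ones(500), h=1.0)
```

Fast summation methods approximate `G` to a guaranteed accuracy $\epsilon$ while avoiding the explicit $M \times N$ distance matrix.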
The bottleneck computational primitive at the heart of various algorithms is the multiplication of a structured matrix with a vector, which we refer to as the matrix-vector product (MVP) primitive. The goal of my thesis is to speed up some of these MVP primitives with fast approximate algorithms that scale as O(N) and also provide high accuracy guarantees. I use ideas from computational physics, scientific computing, and computational geometry to design these algorithms. The proposed algorithms have been applied to speed up kernel density estimation, optimal bandwidth estimation, projection pursuit, Gaussian process regression, implicit surface fitting, and ranking.

**Very Fast Optimal Bandwidth Selection for Univariate Kernel Density Estimation** (2006-01-03)
Raykar, Vikas Chandrakant; Duraiswami, Ramani

Most automatic bandwidth selection procedures for kernel density estimates require estimation of quantities involving the density derivatives. Estimation of modes and inflexion points of densities also requires derivative estimates. The computational complexity of evaluating the density derivative at M evaluation points given N sample points from the density is O(MN). In this paper we propose a computationally efficient $\epsilon$-exact approximation algorithm for univariate, Gaussian kernel based density derivative estimation that reduces the computational complexity from O(MN) to linear order, O(N+M). The constant depends on the desired arbitrary accuracy, $\epsilon$. We apply the density derivative evaluation procedure to estimate the optimal bandwidth for kernel density estimation, a process that is often intractable for large data sets. For example, for N = M = 409,600 points, the direct evaluation of the density derivative takes around 12.76 hours while the fast evaluation requires only 65 seconds, with an error of around $10^{-12}$. Algorithm details, error bounds, the procedure for choosing the parameters, and numerical experiments are presented.
We demonstrate the speedup achieved in bandwidth selection using the "solve-the-equation plug-in method" [18]. We also demonstrate that the proposed procedure can be extremely useful for speeding up exploratory projection pursuit techniques.
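The factorization behind the O(N+M) behaviour described in these abstracts can be illustrated in one dimension: the Gaussian kernel is split about an expansion center, and the cross term is expanded in a truncated Taylor series so that source-dependent moments are accumulated once and reused for every evaluation point. This is a hedged 1D sketch of the idea, not the papers' implementation; the single expansion center and the truncation order `p` are illustrative (the actual algorithms subdivide space with the k-center algorithm and choose `p` from point-wise error bounds):

```python
import numpy as np
from math import factorial

def fast_gauss_1d(x, y, q, h, p=20):
    """Approximate G(y_j) = sum_i q_i * exp(-(y_j - x_i)^2 / h^2) in O(p*(N+M)).

    Factor exp(-(y-x)^2/h^2) = e^{-u^2} e^{-v^2} e^{2uv} with u = (x-c)/h,
    v = (y-c)/h, then truncate e^{2uv} ~= sum_{k<p} (2uv)^k / k!.
    """
    c = x.mean()                      # one expansion center: illustrative only;
    u = (x - c) / h                   # accurate when points lie within ~h of c
    v = (y - c) / h
    # Source-side moments A_k = sum_i q_i e^{-u_i^2} (2 u_i)^k / k!   -- O(p*N)
    A = np.array([np.sum(q * np.exp(-u**2) * (2 * u)**k) / factorial(k)
                  for k in range(p)])
    # Target-side evaluation: G_j ~= e^{-v_j^2} * sum_k A_k v_j^k     -- O(p*M)
    V = np.vander(v, p, increasing=True)   # columns v_j^0 .. v_j^{p-1}
    return np.exp(-v**2) * (V @ A)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 400)           # N = 400 sources
y = np.linspace(-1, 1, 300)           # M = 300 targets
q = np.ones_like(x)
approx = fast_gauss_1d(x, y, q, h=1.0, p=20)
exact = np.exp(-(y[:, None] - x[None, :])**2 / 1.0**2) @ q
```

Here the N source points touch the moment accumulation once and the M targets touch the p moments once, which is the source of the O(N+M) scaling; the truncation error is what the $\epsilon$-exact error bounds control.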