Browsing by Author "Dey, Subhrakanti"
Item: Combined Compression and Classification with Learning Vector Quantization (1998)
Baras, John S.; Dey, Subhrakanti; ISR
Combined compression and classification problems are becoming increasingly important in many applications with large amounts of sensory data and large sets of classes. These applications range from aided target recognition (ATR), to medical diagnosis, to speech recognition, to fault detection and identification in manufacturing systems. In this paper, we develop and analyze a learning vector quantization (LVQ) based algorithm for the combined compression and classification problem. We show convergence of the algorithm using techniques from stochastic approximation, namely, the ODE method. We illustrate the performance of our algorithm with some examples.

Item: Discrete-Time Risk-Sensitive Filters with Non-Gaussian Initial Conditions and their Ergodic Properties (1998)
Dey, Subhrakanti; Charalambous, Charalambos D.; ISR
In this paper, we study asymptotic stability properties of risk-sensitive filters with respect to their initial conditions. In particular, we consider a linear time-invariant system with initial conditions that are not necessarily Gaussian. We show that in the case of Gaussian initial conditions, the optimal risk-sensitive filter asymptotically converges, in the mean square sense, to any suboptimal filter initialized with an incorrect covariance matrix for the initial state vector, provided the incorrect initializing value for the covariance matrix results in a risk-sensitive filter that is asymptotically stable (that is, results in a solution of a Riccati equation that is asymptotically stabilizing). For non-Gaussian initial conditions, we derive the expression for the risk-sensitive filter in terms of a finite number of parameters. Under a boundedness assumption satisfied by the fourth-order moments of the initial state variable and a slow growth condition satisfied by a certain Radon-Nikodym derivative, we show that a suboptimal risk-sensitive filter initialized with Gaussian initial conditions asymptotically approaches the optimal risk-sensitive filter for non-Gaussian initial conditions in the mean square sense. The research and scientific content in this material has been submitted to the 1999 American Control Conference, San Diego, June 1999.

Item: A Framework for Mixed Estimation of Hidden Markov Models (1998)
Dey, Subhrakanti; Marcus, Steven I.; ISR
In this paper, we present a framework for a mixed estimation scheme for hidden Markov models (HMMs). A robust estimation scheme is first presented using the minimax method, which minimizes a worst-case cost for HMMs with bounded uncertainties. We then present a mixed estimation scheme that minimizes a risk-neutral cost subject to a constraint on the worst-case cost. Simulation results are also presented to compare these estimation schemes under uncertainties in the noise model. The research and scientific content in this material has been accepted for presentation at the 37th IEEE Conference on Decision and Control, Tampa, December 1998.

Item: Stochastic Average Consensus Filter for Distributed HMM Filtering: Almost Sure Convergence (2010-05-03)
Ghasemi, Nader; Dey, Subhrakanti; Baras, John S.
This paper studies almost sure convergence of a dynamic average consensus algorithm which allows distributed computation of the product of $n$ time-varying conditional probability density functions, known as beliefs, corresponding to $n$ different nodes within a sensor network. The network topology is modeled as an undirected graph. The average consensus algorithm is used in a distributed hidden Markov model (HMM) filter.
We use the ordinary differential equation (ODE) technique to analyze the convergence of the stochastic approximation type algorithm for average consensus with constant step size, which allows each node to track the time-varying average of the likelihoods of the beliefs belonging to different nodes in the network. It is shown that, for a connected graph, under mild assumptions on the first and second moments of the observation probability distributions and a geometric ergodicity condition on an extended Markov chain, the consensus filter state of each individual sensor converges ${\mathbb{P}\mbox{--a.s. }}$ to the true average of the likelihoods of the beliefs of all the sensors. In order to prove convergence, we introduce a perturbed stochastic Lyapunov function to show that the error between the consensus filter state at each node and the true average visits some compact set infinitely often ${\mathbb{P}\mbox{--w.p.}1}$, and from this it is shown that the error process is bounded ${\mathbb{P}\mbox{--w.p.}1}$.
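To illustrate the kind of constant-step-size dynamic average consensus update the last abstract refers to, here is a minimal sketch. It is a generic Laplacian-based tracker of the average of per-node signals, not the paper's exact algorithm; the function name, step size `eps`, and the 3-node path graph are assumptions for illustration only.

```python
import numpy as np

def dynamic_average_consensus(U, adjacency, eps=0.1):
    """Track the time-varying average of per-node signals U[t, i].

    Illustrative sketch (not the paper's exact algorithm): at each
    step every node mixes its state with its graph neighbours via the
    Laplacian and injects the increment of its own local signal.
    """
    T, n = U.shape
    L = np.diag(adjacency.sum(axis=1)) - adjacency  # graph Laplacian
    x = U[0].copy()            # initialise with the local signals
    states = [x.copy()]
    for t in range(1, T):
        # consensus mixing + local-signal increment
        x = x - eps * (L @ x) + (U[t] - U[t - 1])
        states.append(x.copy())
    return np.array(states)

# Small example: 3-node path graph with static local signals 1, 2, 3.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
T = 500
U = np.ones((T, 3)) * np.array([1.0, 2.0, 3.0])
states = dynamic_average_consensus(U, adj, eps=0.2)
# Each node's state approaches the network average 2.0
```

Because the row sums of the Laplacian are zero, the network-wide average of the states is preserved at every step; for a connected graph and a small enough step size, each node's state contracts toward that average, which is the behaviour the paper establishes (in a much stronger, stochastic setting) for the likelihoods of the beliefs.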