Stochastic Approximation and Optimization for Markov Chains

dc.contributor.advisor: Makowski, Armand M. (en_US)
dc.contributor.author: Bartusek, John D. (en_US)
dc.contributor.department: ISR (en_US)
dc.date.accessioned: 2007-05-23T10:09:41Z
dc.date.available: 2007-05-23T10:09:41Z
dc.date.issued: 2000 (en_US)
dc.description.abstract: We study the convergence properties of the projected stochastic approximation (SA) algorithm, which may be used to find the root of an unknown steady-state function of a parameterized family of Markov chains. The analysis is based on the ODE method, and we develop a set of application-oriented conditions which imply almost sure convergence and are verifiable in terms of typically available model data. Specific results are obtained for geometrically ergodic Markov chains satisfying a uniform Foster-Lyapunov drift inequality.

Stochastic optimization is a direct application of the above root-finding problem when the SA is driven by a gradient estimate of steady-state performance. We study the convergence properties of an SA driven by a gradient estimator which observes an increasing number of samples from the Markov chain at each step of the SA's recursion. To show almost sure convergence to the optimizer, a framework of verifiable conditions is introduced which builds on the general SA conditions proposed for the root-finding problem.

We also consider a difficulty sometimes encountered in applications when selecting the set used in the projection operator of the SA algorithm. Suppose there exists a well-behaved positive recurrent region of the state-process parameter space where the convergence conditions are satisfied; this is the ideal set onto which to project. Unfortunately, the boundaries of this projection set are not known a priori when implementing the SA. We therefore consider the convergence properties when the projection set is chosen to include regions outside the well-behaved region. Specifically, we consider an SA applied to an M/M/1 queue which adjusts the service rate parameter when the projection set includes parameters that cause the queue to be transient.

Finally, we consider an alternative SA in which the recursion is driven by a sample average of observations. We develop conditions implying convergence for this algorithm based on a uniform large deviation upper bound, and we present specialized conditions implying this property for finite-state Markov chains. (en_US)
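The record itself contains no code, but the projected SA recursion and the M/M/1 service-rate example described in the abstract can be illustrated with a short sketch. The Python below is only a minimal illustration under assumed values: the arrival rate, cost weights, finite-difference gradient estimator, step-size sequence, and growing sample schedule are hypothetical choices, not the estimators or convergence conditions developed in the dissertation. The projection interval deliberately extends below the arrival rate, so it includes service rates for which the queue is transient, mirroring the projection-set difficulty discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

LAMBDA = 1.0                   # assumed arrival rate of the M/M/1 queue
MU_LOW, MU_HIGH = 0.5, 4.0     # projection interval H; MU_LOW < LAMBDA, so part of H is transient
C_QUEUE, C_SERVICE = 1.0, 2.0  # hypothetical cost weights for the steady-state objective


def sample_queue_lengths(mu, n, q0=0):
    """Simulate n steps of the embedded M/M/1 queue-length chain at service rate mu."""
    p_arrival = LAMBDA / (LAMBDA + mu)  # jump probability of the embedded (uniformized) chain
    q, out = q0, np.empty(n)
    for i in range(n):
        if rng.random() < p_arrival:
            q += 1                      # arrival
        elif q > 0:
            q -= 1                      # departure
        out[i] = q
    return out


def cost_estimate(mu, n):
    """Estimate the steady-state cost J(mu) = C_QUEUE * E[queue length] + C_SERVICE * mu."""
    return C_QUEUE * sample_queue_lengths(mu, n).mean() + C_SERVICE * mu


def gradient_estimate(mu, n, h=0.05):
    """Crude finite-difference stand-in for a steady-state gradient estimator."""
    return (cost_estimate(mu + h, n) - cost_estimate(mu - h, n)) / (2.0 * h)


def projected_sa(mu0=2.0, n_iter=100):
    """Projected SA recursion mu_{k+1} = Pi_H[mu_k - a_k * Y_k], with an
    increasing number of chain observations per SA step."""
    mu = mu0
    for k in range(1, n_iter + 1):
        a_k = 1.0 / k                   # step sizes a_k -> 0 with divergent sum
        n_k = 20 * k                    # growing observation window at step k
        y_k = gradient_estimate(mu, n_k)
        mu = float(np.clip(mu - a_k * y_k, MU_LOW, MU_HIGH))  # projection onto H
    return mu


if __name__ == "__main__":
    print("final service-rate iterate:", projected_sa())
```

Under these assumed costs, the deterministic objective LAMBDA / (mu - LAMBDA) + C_SERVICE * mu is minimized near mu ≈ 1.71, inside the stable region, so the iterates should drift away from the transient part of the projection interval.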
dc.format.extent: 2017088 bytes
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/1903/6148
dc.language.iso: en_US (en_US)
dc.relation.ispartofseries: ISR; PhD 2000-5 (en_US)
dc.subject: digital communications (en_US)
dc.subject: filtering (en_US)
dc.subject: network management (en_US)
dc.subject: queueing networks (en_US)
dc.subject: signal processing (en_US)
dc.subject: algorithms (en_US)
dc.subject: optimization (en_US)
dc.subject: discrete event dynamical systems (DEDS) (en_US)
dc.subject: stochastic approximation (en_US)
dc.subject: Markov chains (en_US)
dc.subject: Intelligent Signal Processing and Communications Systems (en_US)
dc.title: Stochastic Approximation and Optimization for Markov Chains (en_US)
dc.type: Dissertation (en_US)

Files

Original bundle: PhD_2000-5.pdf (1.92 MB, Adobe Portable Document Format)