Particle Filtering for Stochastic Control and Global Optimization

dc.contributor.advisor: Marcus, Steven I.
dc.contributor.author: Zhou, Enlu
dc.contributor.department: Electrical Engineering
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2009-10-06T06:01:11Z
dc.date.available: 2009-10-06T06:01:11Z
dc.date.issued: 2009
dc.description.abstract: This thesis explores new algorithms and results in stochastic control and global optimization through the use of particle filtering. Stochastic control and global optimization are two areas that have many applications but are often difficult to solve. In stochastic control, an important class of problems, namely partially observable Markov decision processes (POMDPs), provides an ideal paradigm for modeling discrete-time sequential decision making under uncertainty and partial observation. However, POMDPs usually do not admit analytical solutions and are often computationally expensive to solve. While many efficient numerical algorithms have been developed for finite-state POMDPs, only a few have been proposed for continuous-state POMDPs, and relevant analytical results on convergence and error bounds are even scarcer. From the modeling viewpoint, many application problems are modeled more naturally by continuous-state POMDPs than by finite-state POMDPs. Therefore, one part of the thesis is devoted to developing a new efficient algorithm for continuous-state POMDPs and studying its performance both analytically and numerically. Based on the idea of density projection with particle filtering, the proposed algorithm reduces the infinite-dimensional problem to a finite, low-dimensional one, and offers the flexibility and scalability to achieve better approximations when more computational power is available. Error bounds are proved for the algorithm, and numerical experiments are carried out on an inventory control problem. In global optimization, many problems are very difficult to solve due to the presence of multiple local optima or badly scaled objective functions. Many approximate solution methods have been developed and studied. Among them, a recent class of simulation-based methods shares the common characteristic of repeatedly drawing candidate solutions from an intermediate probability distribution and then updating the distribution using these candidate solutions, until the probability distribution becomes concentrated on the optimal solution. The efficiency and accuracy of these algorithms depend heavily on the choice of the intermediate probability distributions and the updating schemes. Using a novel interpretation of particle filtering, these algorithms are unified under one framework, revealing many new insights. By improving the understanding of these existing algorithms, the framework also holds promise for developing new and improved algorithms. Some directions for such algorithms are proposed, and numerical experiments are carried out on a few benchmark problems. (A minimal illustrative sketch of this sampling-and-updating loop appears after the metadata listing below.)
dc.format.extent: 1709703 bytes
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/1903/9535
dc.language.iso: en_US
dc.subject.pqcontrolled: Engineering, Electronics and Electrical
dc.subject.pqcontrolled: Engineering, Industrial
dc.subject.pquncontrolled: Global Optimization
dc.subject.pquncontrolled: Partially Observable Markov Decision Processes
dc.subject.pquncontrolled: Particle Filtering
dc.subject.pquncontrolled: Simulation-Based Optimization
dc.title: Particle Filtering for Stochastic Control and Global Optimization
dc.type: Dissertation
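
The abstract describes simulation-based optimization methods that repeatedly draw candidate solutions from an intermediate distribution and update that distribution until it concentrates on the optimum, viewed through the lens of particle filtering. The following is a minimal sketch of that generic sample-weight-resample-perturb loop, written only to illustrate the idea; it is not the specific algorithm, weighting scheme, or parameter choices studied in the thesis. The function names, the Boltzmann-style weights, and the shrinking-noise schedule are assumptions made for this example.

import numpy as np

def pf_optimize(objective, dim, n_particles=200, n_iters=50,
                init_scale=5.0, noise_decay=0.9, seed=0):
    """Illustrative sketch (not the thesis algorithm): candidate solutions
    act as particles, the objective value plays the role of an observation
    likelihood, and the empirical distribution is updated by weighting,
    resampling, and perturbing."""
    rng = np.random.default_rng(seed)
    # Initial population drawn from a broad proposal distribution.
    particles = rng.normal(0.0, init_scale, size=(n_particles, dim))
    noise = init_scale
    for _ in range(n_iters):
        values = np.array([objective(x) for x in particles])
        # Boltzmann-style weights favor lower objective values; subtracting
        # the minimum keeps the exponentials numerically stable.
        weights = np.exp(-(values - values.min()))
        weights /= weights.sum()
        # Resampling: the empirical distribution concentrates on promising regions,
        # mirroring a particle filter's measurement-update step.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        # Move step: perturb the particles to preserve diversity; the shrinking
        # noise acts like an annealing schedule.
        noise *= noise_decay
        particles += rng.normal(0.0, noise, size=particles.shape)
    best = min(particles, key=objective)
    return best, objective(best)

# Example use on a simple multimodal test function (Rastrigin-like).
if __name__ == "__main__":
    def rastrigin(x):
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    x_best, f_best = pf_optimize(rastrigin, dim=2)
    print(x_best, f_best)

In this reading, the weighting and resampling steps correspond to a particle filter's measurement update, while the random perturbation plays the role of the state-transition (prediction) step; the particular choices above are illustrative only.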

Files

Original bundle (1 file)
Name: Zhou_umd_0117E_10557.pdf
Size: 1.63 MB
Format: Adobe Portable Document Format