OPTIMIZATION UNDER STOCHASTIC ENVIRONMENT

dc.contributor.advisor: Fu, Michael C
dc.contributor.author: Li, Yunchuan
dc.contributor.department: Electrical Engineering
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2020-10-10T05:33:44Z
dc.date.available: 2020-10-10T05:33:44Z
dc.date.issued: 2020
dc.description.abstract: Stochastic optimization (SO) is extensively studied in fields such as control engineering, operations research, and computer science, with applications ranging from path planning (civil engineering) and tool-life testing (industrial engineering) to Go-playing artificial intelligence (computer science). However, SO is usually a hard problem, primarily because of the added complexity introduced by random variables. The objective of this research is to investigate three types of SO problems: single-stage SO, multi-stage SO, and fast real-time parameter estimation in a stochastic environment.

We first study the single-stage optimization problem. We propose Direct Gradient Augmented Response Surface Methodology (DiGARSM), a new sequential first-order method for optimizing a stochastic function. In this approach, gradients of the objective function with respect to the desired parameters are utilized in addition to response measurements. We intend to establish convergence of the proposed method, as well as of traditional approaches that do not use gradients, and we expect the added derivative information to improve the convergence speed.

Second, we analyze a tree search problem with an underlying Markov decision process. Unlike traditional tree search algorithms, where the goal is to maximize the cumulative reward during the learning process, the proposed method aims to identify the action at the root that achieves the highest reward. A new tree search algorithm based on ranking and selection is proposed; the selection policy at each node aims to maximize the probability of correctly selecting the best action.

The third topic is motivated by problems arising in neuroscience, specifically maximum likelihood (ML) parameter estimation of linear models with noise-corrupted observations. We develop an optimization algorithm designed for non-convex, linear state-space model parameter estimation. The ML estimation is carried out by the Expectation-Maximization algorithm, which iteratively updates parameter estimates based on the previous estimates. Since the likelihood surface is in general non-convex, a model-based global optimization method called Model Reference Adaptive Search (MRAS) is applied.
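The first-order methods the abstract describes build on stochastic approximation: iterates are updated with noisy gradient measurements and a decaying step size. The following is a minimal sketch of that general idea (a Robbins-Monro-style recursion on a toy quadratic with simulated gradient noise), not an implementation of DiGARSM itself; the objective, noise level, and step-size schedule are illustrative assumptions.

```python
import random

def noisy_grad(x, sigma=0.1):
    # Gradient of the toy objective f(x) = (x - 2)^2, observed with
    # additive Gaussian noise -- a stand-in for the direct gradient
    # measurements that augment response measurements in the abstract.
    return 2.0 * (x - 2.0) + random.gauss(0.0, sigma)

def stochastic_approximation(x0, n_iter=5000):
    # Robbins-Monro recursion: x_{k+1} = x_k - a_k * g_k, with the
    # classic step-size schedule a_k = 1/(k+1), which satisfies
    # sum a_k = inf and sum a_k^2 < inf.
    x = x0
    for k in range(n_iter):
        a_k = 1.0 / (k + 1)
        x -= a_k * noisy_grad(x)
    return x
```

Despite the noise in each gradient observation, the decaying steps average it out and the iterates settle near the minimizer x = 2.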
dc.identifier: https://doi.org/10.13016/jfhk-sugx
dc.identifier.uri: http://hdl.handle.net/1903/26594
dc.language.iso: en
dc.subject.pqcontrolled: Electrical engineering
dc.subject.pquncontrolled: Markov decision process
dc.subject.pquncontrolled: Maximum likelihood estimation
dc.subject.pquncontrolled: Stochastic approximation
dc.subject.pquncontrolled: Stochastic optimization
dc.title: OPTIMIZATION UNDER STOCHASTIC ENVIRONMENT
dc.type: Dissertation

Files

Original bundle

Name: Li_umd_0117E_21066.pdf
Size: 1.93 MB
Format: Adobe Portable Document Format