OPTIMIZATION UNDER STOCHASTIC ENVIRONMENT
Stochastic optimization (SO) is extensively studied in fields such as control engineering, operations research, and computer science. It has found wide application, ranging from path planning (civil engineering) and tool-life testing (industrial engineering) to Go-playing artificial intelligence (computer science). However, SO is usually a hard problem, primarily because the random variables add complexity beyond that of the underlying deterministic problem. The objective of this research is to investigate three types of SO problems: single-stage SO, multi-stage SO, and fast real-time parameter estimation in a stochastic environment.
We first study the single-stage optimization problem. We propose Direct Gradient Augmented Response Surface Methodology (DiGARSM), a new sequential first-order method for optimizing a stochastic function. In this approach, gradients of the objective function with respect to the parameters of interest are used in addition to response measurements. We intend to establish convergence of the proposed method, as well as of traditional approaches that do not use gradients, and we expect the added derivative information to improve the convergence speed.
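A minimal sketch of the idea, on a toy quadratic objective with noisy response and gradient measurements (the objective, sampling radius, and step size below are illustrative assumptions, not the method's actual specification): at each iterate, an affine response surface is fit by least squares to both kinds of measurements, and the iterate moves against the fitted slope.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_measure(x):
    """Toy stochastic objective (minimum at x = 1): returns a noisy
    response and a noisy direct gradient measurement."""
    y = np.sum((x - 1.0) ** 2) + rng.normal(scale=0.1)
    g = 2.0 * (x - 1.0) + rng.normal(scale=0.1, size=x.size)
    return y, g

def digarsm_step(x, n_samples=8, radius=0.2, lr=0.1):
    """One sketch iteration: fit an affine surface y ~ a + b.x using
    responses AND gradient measurements, then step along -b."""
    d = x.size
    pts = x + rng.uniform(-radius, radius, size=(n_samples, d))
    ys, gs = zip(*(noisy_measure(p) for p in pts))
    # Response rows constrain (a, b); each gradient measurement directly
    # constrains b -- this is the "augmentation" in the sketch.
    A_resp = np.hstack([np.ones((n_samples, 1)), pts])
    A_grad = np.tile(np.hstack([np.zeros((d, 1)), np.eye(d)]), (n_samples, 1))
    A = np.vstack([A_resp, A_grad])
    b = np.concatenate([np.asarray(ys), np.concatenate(gs)])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x - lr * coef[1:]  # move against the fitted slope

x = np.array([3.0, -2.0])
for _ in range(100):
    x = digarsm_step(x)
```

With the gradient rows included, the slope estimate is much less noisy than a response-only fit of the same sample size, which is the intuition behind the expected speedup.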
Second, we analyze a tree search problem with an underlying Markov decision process. Unlike traditional tree search algorithms, where the goal is to maximize the cumulative reward collected during learning, the proposed method aims to identify the action at the root that achieves the highest reward. We propose a new tree search algorithm based on ranking and selection, in which the selection policy at each node maximizes the probability of correctly selecting the best action.
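The contrast at the root can be sketched on a toy problem (the action means, noise level, and OCBA-style allocation rule below are our illustrative simplifications, not the dissertation's policy): the simulation budget is spent to maximize the chance of identifying the best action, not to maximize reward earned while sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy root-level problem: three actions whose rollout rewards are noisy.
TRUE_MEANS = np.array([0.2, 0.6, 0.4])

def rollout(action):
    """Noisy cumulative reward from simulating `action` at the root."""
    return TRUE_MEANS[action] + rng.normal(scale=0.3)

def select_best_action(n_init=10, budget=400):
    """Ranking-and-selection allocation: unlike UCB-style tree search,
    which pulls the empirically best arm most, this spends rollouts to
    sharpen the comparison between close competitors."""
    k = len(TRUE_MEANS)
    samples = [[rollout(a) for _ in range(n_init)] for a in range(k)]
    for _ in range(budget - k * n_init):
        means = np.array([np.mean(s) for s in samples])
        ns = np.array([len(s) for s in samples], dtype=float)
        ses = np.array([np.std(s, ddof=1) for s in samples]) / np.sqrt(ns)
        best = int(np.argmax(means))
        gaps = np.maximum(np.abs(means[best] - means), 1e-9)
        gaps[best] = gaps[np.arange(k) != best].min()
        # OCBA-style heuristic (our simplification): give the next rollout
        # to the action whose estimate is least settled relative to its gap.
        a = int(np.argmax(ses / gaps))
        samples[a].append(rollout(a))
    return int(np.argmax([np.mean(s) for s in samples]))

chosen = select_best_action()
```

Note that the allocation concentrates samples on the top competitors (here actions 1 and 2), whereas a cumulative-reward policy would under-sample the runner-up and settle the comparison more slowly.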
The third topic is motivated by problems arising in neuroscience, specifically Maximum Likelihood (ML) parameter estimation of linear models from noise-corrupted observations. We develop an optimization algorithm for non-convex, linear state-space model parameter estimation. The ML estimate is computed with the Expectation-Maximization (EM) algorithm, which iteratively updates the parameter estimates from the previous iterate. Since the likelihood surface is in general non-convex and EM can stall in poor local optima, a model-based global optimization method, Model Reference Adaptive Search (MRAS), is applied.
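The model-based global search can be illustrated on a stand-in non-convex surface. MRAS maintains a parameterized sampling distribution and shifts it toward high-performing candidates; the sketch below uses the simpler, closely related cross-entropy-style update with a Gaussian sampling distribution, and the toy objective and tuning constants are our assumptions, not the dissertation's likelihood model.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_surface(theta):
    """Stand-in non-convex objective with many local minima;
    the global minimum is at theta = 2 (value -1)."""
    return (theta - 2.0) ** 2 - np.cos(4.0 * np.pi * (theta - 2.0))

def model_based_search(f, mu=0.0, sigma=3.0, pop=100, elite_frac=0.1, iters=60):
    """Model-based global optimization sketch: sample candidates from a
    Gaussian, keep the elite fraction, refit the Gaussian to the elites.
    MRAS additionally reweights samples against a reference model; this
    cross-entropy-style update keeps only the core idea."""
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        xs = rng.normal(mu, sigma, size=pop)
        elite = xs[np.argsort(f(xs))[:n_elite]]
        mu, sigma = elite.mean(), max(elite.std(), 1e-3)
    return mu

theta_hat = model_based_search(toy_surface)
```

Because the sampling distribution covers the whole landscape early on, the search can escape the shallow local basins that would trap a purely local (e.g., gradient-based or plain EM) update.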