STOCHASTIC OPTIMIZATION: APPROXIMATE BAYESIAN INFERENCE AND COMPLETE EXPECTED IMPROVEMENT

dc.contributor.advisor: Ryzhov, Ilya
dc.contributor.advisor: Smith, Paul
dc.contributor.author: Chen, Ye
dc.contributor.department: Mathematics
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2018-09-07T05:39:20Z
dc.date.available: 2018-09-07T05:39:20Z
dc.date.issued: 2018
dc.description.abstract: Stochastic optimization encompasses modeling, computation, and decision making. In practice, owing to the limitations of available mathematical tools or of real-world budgets, many solution methods are designed using approximation techniques or take forms that are efficient to compute and update. Such models have demonstrated practical benefits in a variety of settings, but many of them lack rigorous theoretical support. By interfacing with statistical tools, we analyze the asymptotic properties of two important Bayesian models and establish their validity by proving consistency and other limiting results, which may be useful to algorithmic scientists seeking to leverage these computational techniques for their practical performance.

The first part of the thesis is a consistency analysis of sequential learning algorithms under approximate Bayesian inference. Approximate Bayesian inference is a powerful methodology for constructing computationally efficient statistical mechanisms for sequential learning from incomplete or censored information. Approximate Bayesian learning models have proven successful in a variety of operations research and business problems; however, prior work in this area has been primarily computational, and the consistency of approximate Bayesian estimators has remained a largely open problem. We develop a new consistency theory by interpreting approximate Bayesian inference as a form of stochastic approximation (SA) with an additional "bias" term. We prove the convergence of a general SA algorithm of this form, and leverage this analysis to derive the first consistency proofs for a suite of approximate Bayesian models from the recent literature.

The second part of the thesis proposes a budget allocation algorithm for the ranking and selection problem, a well-known mathematical framework for the formal study of optimal information collection. Expected improvement (EI) is a leading algorithmic approach to this problem; the practical benefits of EI have repeatedly been demonstrated in the literature, especially in the widely studied setting of Gaussian sampling distributions. However, it was recently proved that some of the most well-known EI-type methods achieve suboptimal convergence rates. We investigate a recently proposed variant of EI (known as "complete EI") and prove that, with some minor modifications, it can be made to converge to the rate-optimal static budget allocation without requiring any tuning.
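The abstract's first part interprets approximate Bayesian updating as stochastic approximation with an additional bias term. The following is a minimal illustrative sketch of such a recursion, not the thesis's models: the quadratic objective, Robbins-Monro step sizes, and the particular vanishing bias sequence are all assumptions chosen for illustration.

```python
import numpy as np

def sa_with_bias(grad, theta0, n_iters=5000, seed=0):
    """Stochastic approximation with an additive, decaying bias term:
    theta_{n+1} = theta_n - a_n * (grad(theta_n) + noise_n + bias_n)."""
    rng = np.random.default_rng(seed)
    theta = theta0
    for n in range(1, n_iters + 1):
        a_n = 1.0 / n                  # standard Robbins-Monro step size
        noise = rng.normal(0.0, 1.0)   # zero-mean sampling noise
        bias = 1.0 / n ** 0.75         # bias that vanishes as n grows (assumed rate)
        theta = theta - a_n * (grad(theta) + noise + bias)
    return theta

# Minimizing f(theta) = (theta - 2)^2 / 2, whose gradient is theta - 2,
# so the iterates should settle near theta = 2 despite the noise and bias.
est = sa_with_bias(lambda t: t - 2.0, theta0=0.0)
```

The point of the sketch is that a vanishing bias does not destroy convergence of the recursion, which is the structural fact the abstract's consistency theory builds on.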
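The second part concerns EI-type allocation rules for Gaussian ranking and selection. Below is a minimal sketch of an EI-style score for a maximization problem with independent normal posteriors; scoring non-best alternatives with the combined standard deviation sqrt(sigma_i^2 + sigma_b^2), in the spirit of "complete EI", is an assumption of this sketch rather than the thesis's exact allocation rule.

```python
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def complete_ei_scores(mu, sigma):
    """EI-style scores for each alternative, given posterior means mu
    and posterior standard deviations sigma (both lists of equal length)."""
    b = max(range(len(mu)), key=lambda i: mu[i])  # index of current best
    scores = []
    for i in range(len(mu)):
        if i == b:
            scores.append(0.0)  # the incumbent gets no improvement score here
            continue
        # Combined uncertainty of the pairwise difference (sketch assumption)
        s = math.sqrt(sigma[i] ** 2 + sigma[b] ** 2)
        z = (mu[i] - mu[b]) / s
        scores.append((mu[i] - mu[b]) * normal_cdf(z) + s * normal_pdf(z))
    return scores

scores = complete_ei_scores([1.0, 0.5, 0.9], [0.2, 0.2, 0.3])
```

A sequential procedure would allocate the next sample to the alternative with the largest score; note that the alternative whose mean is close to the incumbent's and whose uncertainty is high receives the larger score, which is the qualitative behavior EI-type rules are valued for.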
dc.identifier: https://doi.org/10.13016/M2B56D796
dc.identifier.uri: http://hdl.handle.net/1903/21150
dc.language.iso: en
dc.subject.pqcontrolled: Statistics
dc.subject.pqcontrolled: Operations research
dc.title: STOCHASTIC OPTIMIZATION: APPROXIMATE BAYESIAN INFERENCE AND COMPLETE EXPECTED IMPROVEMENT
dc.type: Dissertation

Files

Original bundle (showing 1 of 1)
Name: Chen_umd_0117E_19352.pdf
Size: 1.1 MB
Format: Adobe Portable Document Format